
LLM Engineer (Data and Optimization)

TCL Corporate Research (Hong Kong) Co., Limited

Job Description
As a Large Model Algorithm Engineer, you will focus on the technical research and development of Large Language Models (LLMs) and multimodal large models, driving their application in industrial vertical domains. You will participate in core processes such as model training, optimization, and inference deployment while collaborating with top university research teams to explore cutting-edge technologies and improve model performance and efficiency.

Key Responsibilities

  • Responsible for training, supervised fine-tuning (SFT), and system deployment of vertical-domain large models, promoting their efficient application in industrial environments (a minimal fine-tuning sketch follows this list).
  • Research and implement compression and optimization techniques for large models, including pruning, quantization, and knowledge distillation, to improve inference efficiency and deployment performance.
  • Participate in the algorithm design and development of RAG (Retrieval-Augmented Generation) and Agent modules to enhance reasoning capabilities in dynamic and complex environments.
  • Research and apply multimodal understanding technologies to optimize the application of Large Vision Models (LVM) in industrial vision and other fields.
  • Translate business rules into efficient workflow code and participate in the design and implementation of Agentic Workflow to enhance workflow intelligence.
  • Build industry datasets to support large model training and applications, including data preprocessing, pretraining data construction, and training/application/evaluation dataset setup.
  • Research and implement large model merging techniques, exploring collaborative optimization solutions for multiple models.
  • Develop and maintain validation, evaluation, and performance monitoring processes for large models to ensure system stability and usability.
  • Participate in the development and optimization of large model application platforms (microservices) to enhance system modularity and usability.
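For context on the fine-tuning responsibility above, the following is a minimal, illustrative sketch of LoRA-based SFT using the Hugging Face PEFT and TRL libraries named in the qualifications. The base model id, the dataset file, and all hyperparameters are placeholder assumptions rather than this team's actual pipeline, and exact argument names can vary across TRL versions.

    # Minimal, illustrative LoRA-based SFT sketch (placeholder model/data names;
    # exact arguments differ across TRL versions).
    from datasets import load_dataset
    from peft import LoraConfig
    from trl import SFTConfig, SFTTrainer

    # Hypothetical JSONL file of domain samples with a "text" field.
    dataset = load_dataset("json", data_files="industrial_sft.jsonl", split="train")

    peft_config = LoraConfig(
        r=16,                                 # adapter rank
        lora_alpha=32,                        # LoRA scaling factor
        target_modules=["q_proj", "v_proj"],  # attention projections to adapt
        task_type="CAUSAL_LM",
    )

    training_args = SFTConfig(
        output_dir="sft-vertical-model",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        learning_rate=2e-4,
        num_train_epochs=1,
        logging_steps=10,
    )

    trainer = SFTTrainer(
        model="Qwen/Qwen2.5-7B-Instruct",     # placeholder base model id
        train_dataset=dataset,
        args=training_args,
        peft_config=peft_config,
    )
    trainer.train()

Full-parameter fine-tuning with DeepSpeed follows the same pattern: drop the peft_config and point the training arguments at a DeepSpeed configuration file.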

Qualifications

  • Master’s degree or higher in Mathematics, Electrical Engineering, Computer Science, Data Science, or a related field is preferred.
  • Proficient in machine learning, deep learning, and Transformer architecture, with hands-on experience in end-to-end training and development of large models.
  • Familiar with large model compression and optimization methods such as pruning, quantization, and knowledge distillation.
  • Strong capabilities in large-scale data processing and familiarity with big data tools (e.g., Hadoop, Spark), with experience in data preprocessing, cleaning, and building training datasets.
  • Skilled in Python, C/C++, and Linux programming, with a solid foundation in algorithms and data structures.
  • Familiar with mainstream large model training and inference frameworks such as PyTorch, Hugging Face (HF), DeepSpeed, PEFT, vLLM, TRL, etc.
  • Knowledge of Triton or other high-performance inference tools, with experience in applying model optimization to real-world deployments.
  • Proficient in Docker and Linux shell scripting; experience with FastAPI development is a plus (a minimal serving sketch follows this list).
  • Experience in enterprise-level large model development, optimization, deployment, and tool development is preferred.
  • Strong teamwork and communication skills, capable of collaborating with cross-domain teams.
  • Passionate about cutting-edge large model technologies and their applications in industrial vertical domains.
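As a concrete reference for the FastAPI and vLLM items above, here is a minimal, illustrative sketch of a single-endpoint microservice wrapping vLLM's offline inference API. The model id, endpoint path, and sampling settings are assumptions for illustration only, not this team's service design.

    # Minimal, illustrative FastAPI microservice wrapping vLLM offline inference
    # (placeholder model id and endpoint shape).
    from fastapi import FastAPI
    from pydantic import BaseModel
    from vllm import LLM, SamplingParams

    app = FastAPI()
    llm = LLM(model="Qwen/Qwen2.5-7B-Instruct")          # placeholder model id
    params = SamplingParams(temperature=0.7, max_tokens=256)

    class GenerateRequest(BaseModel):
        prompt: str

    @app.post("/generate")
    def generate(req: GenerateRequest):
        # vLLM expects a list of prompts; a single prompt is wrapped in a list.
        outputs = llm.generate([req.prompt], params)
        return {"completion": outputs[0].outputs[0].text}

    # Run with: uvicorn app:app --host 0.0.0.0 --port 8000

In a production deployment one would more likely use vLLM's built-in OpenAI-compatible server or its async engine rather than the blocking offline API; the sketch only shows the basic request/response shape of such a microservice.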

Bonus Points

  • Experience in RAG systems and Agent module development and optimization.
  • Familiarity with CUDA programming, distributed computing, or related high-performance computing technologies.
  • Publications in top-tier conferences (e.g., NeurIPS, ICLR, CVPR).
  • Knowledge of hardware acceleration technologies (e.g., GPU, TPU) and their applications in model optimization.

 


More Information

Location
  • Hong Kong Science Park
Work Model
  • On-site / At the workplace
Employment Term
  • Full-time
Education
  • Master's degree
  • Degree