Artificial intelligence in machine vision is entering a new phase: it is moving beyond inspection accuracy and into lifecycle management, edge deployment, hybrid AI approaches, and real production integration.
We spoke with Dr. Maximilian Lückenhaus from MVTec to understand how HALCON and MERLIC are evolving, where customers are seeing real value, and what the next two to three years could bring for AI-driven vision systems.
MTN: MVTec has been a major name in machine vision for decades. What was the original vision behind HALCON and later MERLIC?
Dr. Lückenhaus:
MVTec started in November 1996 in Munich as a spin-off from the Technical University of Munich and FORWISS. From the beginning, our vision was industrial-grade machine vision that is robust, fast, and ready for real production constraints.
That shaped HALCON early on. Its first commercial version, released in June 1997, already included 3D camera calibration, and we later expanded into embedded platforms.
MERLIC followed in 2014 to remove the barrier of having to be a programmer: it lets users build complete vision applications through a graphical workflow.
MTN: What are the most important AI capabilities that have been added recently to HALCON?
Dr. Lückenhaus:
A key step was Continual Learning for Classification in the latest HALCON release, from November 2025. It lets users update models with only a few images, add new classes, and avoid catastrophic forgetting.
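For readers who want to see the idea in code: below is a minimal sketch of rehearsal-based continual learning, one standard way to update a classifier without erasing what it already knows. It is a generic PyTorch illustration of the concept, not MVTec's HALCON API; the model, buffer, and hyperparameters are assumptions.

```python
# Editorial sketch: rehearsal-based continual learning for a classifier.
# Generic PyTorch illustration of the concept, not HALCON's API; the
# model architecture, buffer, and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallClassifier(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())
        self.head = nn.Linear(16 * 4 * 4, num_classes)

    def forward(self, x):
        return self.head(self.backbone(x))

def add_class(model):
    """Grow the output head by one class while keeping learned weights."""
    old = model.head
    new = nn.Linear(old.in_features, old.out_features + 1)
    with torch.no_grad():
        new.weight[:old.out_features] = old.weight
        new.bias[:old.out_features] = old.bias
    model.head = new

def update(model, new_x, new_y, buf_x, buf_y, steps=20, lr=1e-3):
    """Fine-tune on a few new images while replaying a small buffer of
    earlier samples, which counters catastrophic forgetting."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    x, y = torch.cat([new_x, buf_x]), torch.cat([new_y, buf_y])
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()
```

Replaying even a small buffer of earlier samples alongside the new images is what keeps the old classes from being overwritten during the few-shot update.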
We also focused on real-world speed. HALCON 25.11 optimized deep learning models for classification and code reading, and introduced faster Deep OCR models for resource-constrained devices.
In HALCON 25.05 we advanced hybrid AI for robotics with Deep 3D Matching for robust pose estimation and bin picking, including training with synthetic data generated from CAD models.
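To illustrate what "training with synthetic data generated from CAD models" involves in principle: the sketch below samples random 6-DoF object poses, for which rendered views would serve as perfectly labeled training images. It is an editorial illustration, not MVTec's pipeline; `render_view` is a hypothetical renderer, and the pose ranges are assumptions.

```python
# Editorial sketch: sampling random 6-DoF poses so that views rendered
# from a CAD model come with exact ground-truth pose labels. The
# renderer (`render_view`) is hypothetical; pose ranges are assumptions.
import numpy as np

def random_pose(rng):
    """Random rotation via axis-angle (Rodrigues' formula) plus a
    translation inside a plausible bin volume."""
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    angle = rng.uniform(0.0, np.pi)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    R = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)
    t = rng.uniform([-0.1, -0.1, 0.4], [0.1, 0.1, 0.8])  # meters
    return R, t

rng = np.random.default_rng(seed=0)
samples = []
for _ in range(1000):
    R, t = random_pose(rng)
    # image = render_view(cad_mesh, R, t)  # hypothetical CAD renderer
    samples.append({"R": R, "t": t})       # pose is the ground truth
```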
MTN: As AI becomes more embedded in manufacturing, how is MERLIC’s role evolving?
Dr. Lückenhaus:
MERLIC’s role is to make AI-based vision operational for production teams. Thanks to MERLIC’s graphical user interface and powerful tools, users can create complete machine vision applications easily and intuitively, without any programming. With its all-in-one approach, the no-code software covers the entire process, from image acquisition and image processing through integrated communication interfaces to visualization of the results.
Recent releases focus on integration and process reliability. That includes improved communication plug-ins, a Linux frontend, container deployment examples, stronger error handling, and Siemens Industrial Edge connectivity.
MTN: How do HALCON and MERLIC complement each other, and how should customers choose between them?
Dr. Lückenhaus:
They are complementary by design.
Choose HALCON if you need maximum flexibility, custom logic, deep integration into your own software stack, or advanced engineering workflows such as multi-camera or complex robotics.
Choose MERLIC if you want fast time-to-result with a graphical approach, straightforward PLC integration, and a packaged runtime for production.
MTN: How do you balance rule-based machine vision with deep learning?
Dr. Lückenhaus:
We treat deep learning as another tool, not a replacement. Rule-based methods still win when you need determinism, strict explainability, or minimal data.
Where variability is high, deep learning is often the right choice. The best results frequently come from hybrids, such as Deep 3D Matching or Deep OCR workflows that combine classic methods with AI.
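As an editorial illustration of such a hybrid, the sketch below uses OpenCV for the deterministic stage and leaves the learned stage abstract: rule-based blob detection finds candidate regions cheaply and predictably, and a deep learning classifier (here a hypothetical callable, `classify_crop`) judges only the crops.

```python
# Editorial sketch of a hybrid pipeline: deterministic rule-based
# detection locates candidates; a deep learning classifier (assumed,
# passed in as `classify_crop`) inspects only the cropped regions.
import cv2

def find_candidates(gray, min_area=100):
    """Deterministic stage: Otsu threshold, then blob bounding boxes."""
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]

def inspect(image, classify_crop):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    results = []
    for x, y, w, h in find_candidates(gray):
        verdict = classify_crop(image[y:y + h, x:x + w])  # DL stage
        results.append(((x, y, w, h), verdict))
    return results
```

The division of labor is the point: the rule-based stage stays explainable and fast, while the learned stage absorbs the visual variability.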
MTN: What have been the most surprising customer use cases so far?
Dr. Lückenhaus:
Two things stand out. First, how far customers push machine vision beyond classic inspection, even into extreme environments. HALCON supported NASA’s humanoid robot Robonaut 2 (R2) on the International Space Station because of its strong 3D vision capabilities.
Second, how quickly teams turn AI experiments into production once workflows become maintainable in terms of data, retraining, deployment, and monitoring.
MTN: What are the biggest technical challenges when integrating advanced AI into production?
Dr. Lückenhaus:
The hardest challenges are practical ones: data scarcity, labeling effort, domain shift over time, compute limits at the edge, and validation or compliance.
We address these with our Deep Learning Tool, which reduces labeling and training friction, and with lifecycle features such as Continual Learning. We also offer compliance enablers such as SBOMs (software bills of materials) in HALCON 25.11 to support modern security and regulatory requirements.
MTN: How is MVTec adapting to AI accelerators and edge computing?
Dr. Lückenhaus:
We optimize both software and pre-trained models for constrained hardware. HALCON 25.11 targets faster inference and efficient model support, while Continual Learning is designed to work in edge environments.
We also collaborate with hardware partners, including Siemens industrial PCs with embedded AI acceleration and Qualcomm NPUs for smart cameras. Our AI² Accelerator Interface lets customers use supported AI accelerator hardware through runtimes such as NVIDIA TensorRT or Intel OpenVINO.
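For readers unfamiliar with accelerator runtimes, the sketch below shows what targeting a device through one of the backends named above can look like, using OpenVINO's public Python API directly. This is a generic illustration, not HALCON's AI² interface; the model file and device name are assumptions.

```python
# Editorial sketch: compiling a model for a chosen device with the
# OpenVINO Python API (2022+ releases). Generic illustration, not
# HALCON's AI² interface; "model.onnx" and the device are assumptions.
import numpy as np
from openvino.runtime import Core

core = Core()
print(core.available_devices)            # e.g. ['CPU', 'GPU', 'NPU']

model = core.read_model("model.onnx")    # hypothetical exported model
compiled = core.compile_model(model, device_name="CPU")

frame = np.zeros((1, 3, 224, 224), dtype=np.float32)  # dummy input
result = compiled([frame])[compiled.output(0)]
print(result.shape)
```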
MTN: Looking ahead two or three years, what should manufacturers expect in deep learning and machine vision?
Dr. Lückenhaus:
The next leap will come from AI as a framework rather than a single technology. AI will automate many tasks in machine vision, including setting the right parameters for applications.
This will also increase the importance of rule-based methods, because automated parameter setting lets them be applied more efficiently. We are working on interfaces to AI agents to simplify the development and deployment of machine vision applications.
MTN: Where do you see generative AI intersecting with industrial vision?
Dr. Lückenhaus:
I see two main intersections.
First is synthetic data generation to reduce data bottlenecks. Generating labeled training data from CAD for Deep 3D Matching already points in that direction.
Second is AI-assisted development and operations. Assistants can help users navigate tooling, generate boilerplate code, and speed up engineering.
MTN: How important are partnerships and ecosystem integration for scaling AI vision?
Dr. Lückenhaus:
Interoperability will decide whether AI vision scales. Partnerships that align software, computers, cameras, PLCs, and industrial platforms matter most.
Our Technology Partner Program is key here, with collaborations such as those with Siemens and Qualcomm focused on integrated performance and predictable deployment.
MTN: What is the biggest misconception about AI in industrial vision today?
Dr. Lückenhaus:
The biggest misconception is that AI will magically replace engineering. In industry, success depends on data strategy, validation, and maintainable deployment.
AI tools can help with parameter selection and speed up development, but many vision tasks are very specific and still require experienced application engineers to create robust, 24/7 production solutions.
MTN Analysis: AI Vision Moves from Accuracy to Lifecycle Control
Three themes stand out from this conversation.
Continual learning is becoming a production requirement
Manufacturing environments change constantly. The ability to update models with minimal data and without full retraining is becoming a core capability.
Hybrid AI is replacing single-method approaches
The strongest results are coming from systems that combine deterministic vision methods with deep learning, delivering both flexibility and reliability.
AI is becoming an operational framework
The next stage is AI acting as a layer across the entire vision workflow, from parameter selection to deployment and maintenance.
For machine tool users, this signals a clear shift. AI vision is moving from experimental pilots toward stable, maintainable production systems, with integration, lifecycle tools, and edge performance becoming the deciding factors.