A robotics demo is impressive. The hardware moves. The vision system detects. The arm picks and places. Everyone in the room nods.
Then implementation begins. And the real work starts — not with the robot, but with everything around it.
ERP. LIMS. PMS. Access control systems. Compliance logging. HR workflows. Network segmentation. Security review. Change management. The robot is the easy part. Plugging it into an enterprise that has 20 years of accumulated systems, processes, and people — that is where most deployments get stuck.
The IT/OT Convergence Problem Nobody Talks About
Enterprise operations run on IT systems — ERP, LIMS, PMS, document management. Robotics runs on OT systems — real-time controllers, sensor networks, proprietary communication protocols.
These two worlds have different latency requirements, different security models, different update cadences, and different failure modes. IT systems tolerate downtime for patches. OT systems do not. IT security assumes a perimeter. OT security was never designed for network connectivity at all.
When you deploy a robot into a hospital, a hotel, or a QC lab, you are forcing these two worlds to share data in real time. The robot needs to read task queues from the PMS. It needs to write completion records to the compliance log. It needs to receive priority overrides from the supervisor dashboard. Every one of these touchpoints is a place where the integration can break — and where, if it breaks silently, nobody notices until something important goes wrong.
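To make the "breaks silently" failure mode concrete, here is a minimal sketch of a touchpoint wrapper that converts quiet failures into loud ones. All names (`TouchpointMonitor`, `post_completion`, the PMS client) are hypothetical illustrations, not any vendor's actual API.

```python
import time
from dataclasses import dataclass, field
from typing import Optional


class IntegrationError(RuntimeError):
    """Raised so a failed write surfaces immediately instead of vanishing."""


@dataclass
class TouchpointMonitor:
    """Tracks the health of one IT/OT touchpoint (e.g. the PMS task queue).

    A touchpoint that has not succeeded within max_silence_s is reported
    as stale -- the silent-failure case described above.
    """
    name: str
    max_silence_s: float = 60.0
    last_success: float = field(default_factory=time.monotonic)

    def record_success(self) -> None:
        self.last_success = time.monotonic()

    def is_stale(self, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        return (now - self.last_success) > self.max_silence_s


def write_completion(pms_client, record: dict, monitor: TouchpointMonitor) -> None:
    """Write a completion record, turning any quiet failure into a loud one."""
    try:
        pms_client.post_completion(record)  # hypothetical client call
    except Exception as exc:
        raise IntegrationError(f"{monitor.name}: completion write failed") from exc
    monitor.record_success()
```

The design point is that staleness is checked independently of the write path: a supervisor dashboard polling `is_stale()` notices a dead connector even when no code path is raising.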
Data Standardisation Is Not a Technical Problem — It's a Political One
Every enterprise system speaks a different dialect. SAP has its data model. Oracle has a different one. The LIMS in a QC lab almost certainly has a custom schema built by the team that implemented it a decade ago. The PMS in a hotel has fields and status codes specific to that property's configuration.
A robot that generates task completion records needs those records to land in the right place, in the right format, with the right field mappings. Getting those mappings agreed upon — between the robotics vendor, the system integrator, the IT team, the operations team, and the compliance officer — is not a technical challenge. It is a coordination challenge requiring alignment across teams that rarely speak to each other.
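The technical half of that coordination problem is small enough to sketch. Assuming a hypothetical robot-side record and PMS-side schema (every field name below is invented for illustration), the mapping layer can be a declarative table, and schema drift on either side should fail loudly rather than dropping fields:

```python
# Hypothetical field names on both sides; in practice these mappings are
# negotiated per deployment between vendor, integrator, IT, and compliance.
ROBOT_TO_PMS = {
    "task_id":      "TaskRef",
    "completed_at": "CompletionTimestamp",
    "operator_id":  "StaffCode",
    "room":         "LocationCode",
}


def map_record(robot_record: dict, mapping: dict) -> dict:
    """Translate a robot-side record into the target system's schema.

    Unknown fields are rejected rather than silently dropped, so schema
    drift on either side surfaces at the mapping layer, not months later.
    """
    unknown = set(robot_record) - set(mapping)
    if unknown:
        raise KeyError(f"unmapped fields: {sorted(unknown)}")
    return {mapping[key]: value for key, value in robot_record.items()}
```

The code is trivial; agreeing on the contents of `ROBOT_TO_PMS` across five teams is the actual work.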
Most robotics vendors hand this problem to the customer. They provide an API. They document the schema. Then they wait for the customer to figure out how to connect it to everything else.
The Training Data Problem Nobody Wants to Admit
Modern robotics systems learn from operational data. The model that drives autonomous navigation, task prioritisation, and exception handling improves as it sees more examples from the real environment.
The uncomfortable truth: that data does not exist before deployment. It cannot be pre-generated in a warehouse or a test facility. A robot learning to navigate a specific hotel corridor needs data from that corridor — the actual floor texture, the actual lighting conditions, the actual pattern of human foot traffic at different times of day.
This means every enterprise deployment starts with a period of reduced performance. The robot is slower. It raises more exceptions. It requires more human oversight. Enterprises need to be prepared for this — and vendors need to be honest about it rather than presenting polished demo performance as the baseline expectation.
The implication is direct: the enterprise must generate its own training data. A vendor who cannot support a supervised learning phase — with tooling for humans to label exceptions, flag errors, and confirm correct behaviours — is selling a product that will plateau rather than improve after deployment.
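What that supervised learning phase produces, concretely, is a stream of labelled events. A minimal sketch of one such record (all field names and the verdict taxonomy are assumptions, not a standard):

```python
from dataclasses import dataclass, asdict
from enum import Enum


class Verdict(Enum):
    CORRECT = "correct"      # human confirmed the robot's behaviour
    INCORRECT = "incorrect"  # human flagged the behaviour as an error
    EXCEPTION = "exception"  # novel situation the model has not seen


@dataclass(frozen=True)
class LabelledEvent:
    """One human judgement from the review dashboard, destined for retraining."""
    event_id: str
    robot_action: str   # description of what the robot did
    verdict: Verdict
    reviewer_id: str
    note: str = ""

    def to_training_row(self) -> dict:
        """Flatten into a plain dict suitable for a retraining dataset."""
        row = asdict(self)
        row["verdict"] = self.verdict.value
        return row
```

A vendor with this tooling turns the reduced-performance period into training data; one without it just has a reduced-performance period.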
What Good Enterprise Integration Actually Looks Like
Four things separate deployments that work from deployments that get quietly shelved after six months:
- Middleware ownership. The robotics vendor owns the integration layer — not the customer's IT team and not a third-party integrator. When the ERP schema changes, the vendor updates the connector. The customer does not manage the bridge.
- Bidirectional data flow. The robot writes to enterprise systems, and enterprise systems write to the robot. Task assignments, priority changes, zone access rules — all of these flow from the enterprise into the robot's task model, not just from the robot out.
- Supervised learning tooling. A dashboard where operations staff can review robot decisions, flag incorrect behaviour, and confirm exceptions — generating labelled data that feeds back into the model. This is what makes performance compound over time.
- Compliance by design. Every action logged with timestamp, operator ID, and confirmation state — not as an add-on, but built into the task model so compliance data is generated automatically as a byproduct of normal operation.
These are not advanced features. They are the minimum requirements for a deployment that an enterprise can actually rely on.
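The compliance-by-design point is the easiest of the four to show in code. In the sketch below (a hypothetical task runner, not any vendor's implementation), the audit entry is emitted in a `finally` block, so every path through the task model produces exactly one record with timestamp, operator ID, and confirmation state — including the paths where the task fails:

```python
import datetime
from typing import Callable


def run_task(task_id: str, operator_id: str,
             action: Callable[[], bool], audit_log: list) -> bool:
    """Execute a task; the compliance entry is a byproduct, not an add-on.

    The log write sits in a finally block, so no code path -- success,
    unconfirmed completion, or exception -- can skip it.
    """
    confirmed = False
    try:
        confirmed = action()
    finally:
        audit_log.append({
            "task_id": task_id,
            "operator_id": operator_id,
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "confirmation_state": "confirmed" if confirmed else "unconfirmed",
        })
    return confirmed
```

Bolting logging on afterwards means hunting down every call site; building it into the task model means the question never arises.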
The robotics industry is still in a phase where impressive demos substitute for difficult conversations about integration requirements. That is starting to change — as more organisations reach the implementation phase and realise that the demo was the simple part.
The vendors who survive the next five years will be the ones who knew that from the beginning.