Pick-to-Light Packing Verification
Packing errors in an automotive kit warehouse were costing millions in annual warranty claims.
System Architecture • Systems Integration • Event-Driven Architecture
Background
In an automotive warehouse, operators packed customer-specific kits for direct shipment. Every packing error triggered warranty claims with financial penalties. The process required tight integration between floor operations, the warehouse management system, and the physical hardware.
Off-the-shelf pick-to-light systems worked from static, pre-configured pick lists. This warehouse ran a live management system with dynamic pick data changing between orders. No existing product could integrate with it.
The Solution
I led architecture and implementation of a custom pick-to-light system. The middleware owns all runtime state and integrates directly with the live warehouse management system. Edge devices handle immediate hardware response on the floor. A web interface gives supervisors configuration, monitoring, and override control.
The hardest part was getting the business logic right. I worked with the client's analysts to map their pick list lifecycle into a state machine, then built a system flexible enough to generalize across warehouse areas. This was as much an integration problem as an engineering one.
Deep Dive
Process Flow
The system processes one pick list at a time. The middleware pulls the next list from the database, marks it active, and lights the bins. The operator walks the lit path. At each bin, a proximity sensor verifies both steps: the pick from the bin and the placement into the bag. When all bins confirm, the list closes and the next one loads.
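The per-bin flow above can be sketched as a small state machine. This is an illustrative model, not the production code; the names (`BinState`, `PickList`, `sensor_event`) are hypothetical, and the real system tracks more than bin state.

```python
from dataclasses import dataclass
from enum import Enum, auto

class BinState(Enum):
    DARK = auto()       # not part of the active list
    LIT = auto()        # waiting for the operator
    PICKED = auto()     # pick confirmed, placement pending
    CONFIRMED = auto()  # pick and placement both verified

@dataclass
class PickList:
    """One active pick list; the middleware processes one at a time."""
    bins: dict[str, BinState]

    def light(self) -> None:
        """Activate the list: light every bin on it."""
        self.bins = {b: BinState.LIT for b in self.bins}

    def sensor_event(self, bin_id: str) -> None:
        # Each proximity-sensor trip advances the bin one step:
        # the first trip confirms the pick, the second the placement.
        state = self.bins[bin_id]
        if state is BinState.LIT:
            self.bins[bin_id] = BinState.PICKED
        elif state is BinState.PICKED:
            self.bins[bin_id] = BinState.CONFIRMED

    @property
    def complete(self) -> bool:
        """True once every bin has confirmed both steps; the list then closes."""
        return all(s is BinState.CONFIRMED for s in self.bins.values())
```

When `complete` flips to true, the middleware would close the list and pull the next one from the database.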
If anything goes wrong, the station locks. Five mispick types are detected: wrong bin, double pick, pick instead of place, place instead of pick, and pick during an active mispick. On any mispick, the triggering light flashes red, every other sensor interaction is blocked, and the tower alarm activates. A supervisor overrides to resume.
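The five mispick types reduce to a classification over each sensor event. A minimal sketch, assuming a simplified event model (the `classify` signature and its boolean inputs are illustrative; the real detection paths carry more context):

```python
from enum import Enum, auto

class Mispick(Enum):
    WRONG_BIN = auto()
    DOUBLE_PICK = auto()
    PICK_INSTEAD_OF_PLACE = auto()
    PLACE_INSTEAD_OF_PICK = auto()
    PICK_DURING_MISPICK = auto()

def classify(event: str, expected_event: str, same_bin: bool,
             already_picked: bool, locked: bool) -> "Mispick | None":
    """Return the mispick type for one sensor event, or None if valid.

    event / expected_event are "pick" or "place". On any non-None result
    the station would lock: the triggering light flashes red, all other
    sensor interactions are blocked, and the tower alarm activates.
    """
    if locked:
        # Station is locked from a prior mispick; everything is blocked.
        return Mispick.PICK_DURING_MISPICK
    if not same_bin:
        return Mispick.WRONG_BIN
    if event == "pick" and already_picked:
        return Mispick.DOUBLE_PICK
    if event == "pick" and expected_event == "place":
        return Mispick.PICK_INSTEAD_OF_PLACE
    if event == "place" and expected_event == "pick":
        return Mispick.PLACE_INSTEAD_OF_PICK
    return None  # valid event; only a supervisor override clears a lock
```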
The web interface mirrors system state in real time, giving supervisors visibility into active lists, station status, and mispick events without being on the floor.
Architecture
Pick data lives in the warehouse management system (MSSQL). The middleware (FastAPI, SQLAlchemy, Docker) owns all runtime state and exposes an API for edge devices and the web interface. Each edge device is a Python process on a single-board computer that talks to its light controller over TCP, which drives lights, sensors, and the tower alarm via Modbus. The web interface (React, TypeScript) handles configuration, monitoring, and supervisor controls.
Centralizing state in the middleware was deliberate. One wrong pick costs real money. Distributed state that could disagree between hardware and the database was not an option. Edge devices stay simple. The middleware is the single source of truth for everything on the floor.
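The division of responsibility can be sketched as follows: the middleware holds every station's state and emits commands, while edge devices execute those commands verbatim and keep no durable state of their own. All names here (`Middleware`, `StationState`, the command dicts) are hypothetical, not the production API.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class StationState:
    """Middleware-side record: the single authoritative copy of floor state."""
    active_list: str | None = None
    lit_bins: set[str] = field(default_factory=set)
    locked: bool = False

class Middleware:
    """Single source of truth: every floor event flows through here before
    any light changes state or any database row is updated."""
    def __init__(self) -> None:
        self.stations: dict[str, StationState] = {}

    def activate(self, station: str, list_id: str, bins: set[str]) -> list[dict]:
        self.stations[station] = StationState(active_list=list_id, lit_bins=set(bins))
        # Commands the edge device executes verbatim; it keeps no copy of the list,
        # so hardware and database state cannot silently disagree.
        return [{"cmd": "light_on", "bin": b} for b in sorted(bins)]

    def lock(self, station: str) -> list[dict]:
        self.stations[station].locked = True
        return [{"cmd": "alarm_on"}]
```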
Hardware Integration
Each edge device runs Python on a single-board computer and is the only layer that talks to hardware. It receives commands as JSON over the API and translates them into Modbus commands over TCP to the controller, which drives pick lights, proximity sensors, and the tower alarm. Each light has a proximity sensor that verifies both the pick from the shelf and the placement into the bag. Buttons are disabled. Proximity is the only trigger, because a button press doesn't prove the operator actually reached into the bin.
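The JSON-to-Modbus translation can be sketched with a stand-in controller. The coil addresses and command names below are assumptions for illustration; the real address map comes from the controller vendor, and the real device writes over Modbus TCP rather than to an in-memory dict.

```python
import json

class FakeController:
    """Stand-in for the Modbus/TCP light controller: coils drive outputs
    (pick lights, tower alarm); discrete inputs would reflect the sensors."""
    def __init__(self) -> None:
        self.coils: dict[int, bool] = {}

    def write_coil(self, address: int, value: bool) -> None:
        self.coils[address] = value

# Hypothetical address map, one coil per pick light plus the alarm.
LIGHT_BASE = 100
ALARM_COIL = 0

def handle_command(raw: str, ctrl: FakeController) -> None:
    """Translate one JSON command from the middleware into coil writes."""
    cmd = json.loads(raw)
    if cmd["cmd"] == "light_on":
        ctrl.write_coil(LIGHT_BASE + cmd["index"], True)
    elif cmd["cmd"] == "light_off":
        ctrl.write_coil(LIGHT_BASE + cmd["index"], False)
    elif cmd["cmd"] == "alarm_on":
        ctrl.write_coil(ALARM_COIL, True)
```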
Challenges
Business logic alignment. The client's warehouse management system had its own pick list lifecycle with specific states, transitions, and database update points. I worked directly with their analysts to map this lifecycle into the system's state machine. The system also needed to generalize across warehouse areas with different products, workflows, and physical layouts, all running on the same management system.
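A lifecycle-to-state-machine mapping of this kind can be expressed as an explicit transition table. The states and events below are illustrative; the client's actual lifecycle has more states and database update points.

```python
from enum import Enum, auto

class ListState(Enum):
    QUEUED = auto()
    ACTIVE = auto()
    LOCKED = auto()    # mispick: supervisor override required
    COMPLETE = auto()

# Transition table distilled from the pick list lifecycle (illustrative).
# Making every legal transition an explicit entry means an unexpected
# event fails loudly instead of silently corrupting state.
TRANSITIONS = {
    (ListState.QUEUED, "activate"): ListState.ACTIVE,
    (ListState.ACTIVE, "mispick"): ListState.LOCKED,
    (ListState.LOCKED, "override"): ListState.ACTIVE,
    (ListState.ACTIVE, "all_confirmed"): ListState.COMPLETE,
}

def step(state: ListState, event: str) -> ListState:
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {state.name} + {event}")
```

Generalizing across warehouse areas then becomes a matter of swapping in a different transition table rather than rewriting control flow.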
Mispick detection. Five conditions, each with its own detection path and lockout behavior. The logic was straightforward once defined, but physical sensor placement was not. Bin sizes vary. Operator reach varies. Some bins are fully open on top. Sensors had to be positioned so the operator could not physically bypass the proximity range on any bin, and where that was not possible, physical guides were added to restrict access.
What Didn't Work Initially
Hardware ID assignment. The controllers worked well for dynamic pick lists, but initial setup was painful. Every light had to be pressed individually, in sequence, to assign its ID. At scale this was slow, error-prone, and unsupported by any tooling.
I built a setup mode into the edge device, triggered through the API. It walks operators through the layout zone by zone, colors each light awaiting a press, and extinguishes it on confirmation. The web interface shows progress and handles ID mapping. It turned a frustrating manual process into something repeatable.
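The zone-by-zone walk can be sketched as a generator that yields one awaiting light at a time, pairing each with the hardware ID its press reports. Function names and the press model are assumptions for illustration.

```python
def setup_sequence(zones: dict[str, list[int]]):
    """Yield (zone, position) pairs in walk order. The edge device colors
    the yielded light and waits for a press before advancing."""
    for zone, positions in zones.items():
        for position in positions:
            yield zone, position

def assign_ids(zones: dict[str, list[int]], presses: list[str]) -> dict:
    """Pair each awaiting light with the hardware ID reported by its press,
    producing the mapping the web interface persists."""
    mapping = {}
    for (zone, position), hw_id in zip(setup_sequence(zones), presses):
        mapping[hw_id] = (zone, position)
    return mapping
```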
ID mapping across three systems. Hardware IDs had to map to warehouse product IDs, then to product names, then to pick sheet identifiers from a separate system. None shared a common key.
I built a configuration interface that surfaces all three namespaces side by side, lets admins establish mappings, and locks externally-owned columns.
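The three-namespace join can be modeled as a single admin-maintained record per bin, since no common key exists between the systems. Field names below are illustrative, not the production schema.

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass(frozen=True)
class BinMapping:
    """One row of the configuration table: three namespaces with no shared
    key, joined only by this admin-maintained record."""
    hardware_id: str     # assigned during setup mode (editable by admins)
    product_id: str      # owned by the warehouse management system (locked)
    product_name: str    # owned by the warehouse management system (locked)
    pick_sheet_ref: str  # owned by the separate pick-sheet system (locked)

def resolve(mappings: list[BinMapping], hardware_id: str) -> BinMapping | None:
    """From a hardware event, recover everything the middleware needs."""
    return next((m for m in mappings if m.hardware_id == hardware_id), None)
```

Locking the externally-owned columns in the interface keeps admins from editing values that the source systems would silently overwrite.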
Trade-offs
Event-driven over polling or message broker. With MQTT, every state transition turns into a message that something else has to receive, parse, and act on. The pick list lifecycle has enough states and edge cases that managing them through a broker introduces polling loops, callback chains, and ordering problems that make the logic harder to follow and harder to test. Direct event-driven communication between middleware and edge devices keeps the state transitions explicit, the topology simple, and the failure surface small.
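The direct approach can be sketched as an in-process dispatcher: a state transition fires its handlers synchronously, so ordering is just call order and failures surface immediately. This is a minimal illustration of the pattern, not the production implementation.

```python
from collections import defaultdict
from typing import Callable

class Dispatcher:
    """Direct event-driven dispatch: no broker, no polling loop, no
    callback chain. Handlers run synchronously in registration order,
    keeping transitions explicit and easy to test."""
    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable]] = defaultdict(list)

    def on(self, event: str, handler: Callable) -> None:
        self._handlers[event].append(handler)

    def fire(self, event: str, **payload) -> None:
        # A raised exception propagates to the caller immediately,
        # instead of vanishing into a broker's dead-letter queue.
        for handler in self._handlers[event]:
            handler(**payload)
```

Testing a transition is then a plain function call with a plain assertion, with no broker to stand up.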