RFID Stock Management: Architecture & Challenges
I'm currently working on an RFID-based stock management solution. The high-level goals of the solution are to ensure that the stock shown on the computer is the stock actually available in the store, that the correct products (in the correct quantities) are displayed on the shop floor, and to reduce stock loss by having real-time visibility of what's passing through the tills before it's taken out of the store. As a start, we are only doing this for clothing products: our items are source (factory) tagged with RFID tags, and these items are then tracked from deliveries to sale & returns using various types of RFID readers, like handhelds, fixed portals, security gates and Click & Collect readers.
The hardware side of the project is interesting, but it's mostly off-the-shelf readers & gates supplied by Motorola, Checkpoint & Nedap. These readers do the bulk of the work, and we run a simple integration agent on top of them to connect them to our software backend.
Our software backend is a set of SOA-based (REST) web services built with ASP.NET Web API & hosted on Windows. From a design & implementation perspective, we use CQS, DDD & Event Sourcing, and our domain entities are persisted (using NHibernate) in Oracle (Exadata), which is our master data store. We use SpecFlow/NCrunch to automate our acceptance testing and NUnit for unit testing.
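To make the command/query split concrete, here is a minimal sketch of how one of these Web API endpoints is shaped. All the type names here (IStockReadStore, ICommandBus, ReceiveDeliveryCommand) are hypothetical stand-ins rather than our actual contracts:

```csharp
// A minimal sketch of the command/query split on a Web API controller.
// All names (IStockReadStore, ICommandBus, etc.) are hypothetical.
using System.Net;
using System.Web.Http;

public class ReceiveDeliveryCommand
{
    public string StoreId { get; set; }
    public string[] Epcs { get; set; } // RFID tag identifiers on the delivered items
}

public interface IStockReadStore
{
    object GetStockPosition(string storeId); // served from the cache-based read store
}

public interface ICommandBus
{
    void Send(ReceiveDeliveryCommand command); // raises events on the async write path
}

public class StockController : ApiController
{
    private readonly IStockReadStore _readStore;
    private readonly ICommandBus _commandBus;

    public StockController(IStockReadStore readStore, ICommandBus commandBus)
    {
        _readStore = readStore;
        _commandBus = commandBus;
    }

    // Query side: no domain logic, just a lookup against the read store.
    [HttpGet]
    public IHttpActionResult GetStock(string storeId)
    {
        return Ok(_readStore.GetStockPosition(storeId));
    }

    // Command side: accepted and processed asynchronously; the events it
    // produces are what the read-side projections are built from.
    [HttpPost]
    public IHttpActionResult ReceiveDelivery(ReceiveDeliveryCommand command)
    {
        _commandBus.Send(command);
        return StatusCode(HttpStatusCode.Accepted);
    }
}
```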
We started this as a typical .NET project and ran into interesting challenges around performance & latency on the read path, which pushed us to do more & more work on the asynchronous write path. We began to separate our read & write stores and decided to build the read store entirely in the cache, based on the event stream we capture on the write path. We started with Couchbase, with its memcached data bucket, as our first choice for the read store. Couchbase is a great technology: it converges the key-value & document store models into one great product, and I love the power & simplicity of its map/reduce framework.
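As an illustration of the read-store idea, here is a bare-bones sketch of a projection consuming the event stream and keeping a cache-based stock view up to date. The event shape and the IKeyValueCache interface are simplified assumptions, not our actual contracts:

```csharp
// A simplified sketch of projecting write-side events into a cache-based read
// model. ItemSold and IKeyValueCache are illustrative, not our real contracts.
public class ItemSold
{
    public string StoreId { get; set; }
    public string Sku { get; set; }
    public int Quantity { get; set; }
}

public interface IKeyValueCache
{
    T Get<T>(string key);
    void Set<T>(string key, T value);
}

public class StockProjection
{
    private readonly IKeyValueCache _cache;

    public StockProjection(IKeyValueCache cache) { _cache = cache; }

    // Applied for every event on the asynchronous write path, so the read
    // store always holds a pre-computed view and reads never hit Oracle.
    public void When(ItemSold e)
    {
        var key = "stock:" + e.StoreId + ":" + e.Sku;
        _cache.Set(key, _cache.Get<int>(key) - e.Quantity);
    }
}
```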
For us, though, Couchbase didn't work out as well: our latency remained high because of the computation involved on huge lists of stock data. We needed to bring data streams from the cache into our service, compute variances, categorisation and so on, and then store the computed results back in the cache.
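To illustrate the round trip that hurt us, here is a simplified sketch of that read-compute-write pattern, reusing the IKeyValueCache interface from the sketch above; the key names and variance logic are illustrative only:

```csharp
// Simplified sketch of the read-compute-write round trip. IKeyValueCache is
// the illustrative interface from the projection sketch above.
using System.Collections.Generic;

public class VarianceCalculator
{
    private readonly IKeyValueCache _cache;

    public VarianceCalculator(IKeyValueCache cache) { _cache = cache; }

    public void ComputeVariance(string storeId)
    {
        // Hops 1 & 2: pull two potentially huge stock lists over the network.
        var expected = _cache.Get<Dictionary<string, int>>("expected:" + storeId);
        var counted = _cache.Get<Dictionary<string, int>>("counted:" + storeId);

        // Compute in the service: expected minus counted, per SKU.
        var variance = new Dictionary<string, int>();
        foreach (var pair in expected)
        {
            int countedQty;
            counted.TryGetValue(pair.Key, out countedQty);
            var diff = pair.Value - countedQty;
            if (diff != 0)
                variance[pair.Key] = diff;
        }

        // Hop 3: push the computed result back into the cache.
        _cache.Set("variance:" + storeId, variance);
    }
}
```

Every answer pays for shipping the full lists out of the cache and the result back in, and that is where our latency went.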
Our next choice was Redis. The Sorted Set data structure in Redis aligned nicely with the data model we need to store & compute. I'm extremely impressed with the power of Redis, and the ability to run computation in the cache is exactly what we needed: most of our computation can be done with a single union or intersection command on sorted sets, which is a sub-millisecond operation. We are actively building on this model, and in future posts I'll share more details on our architecture and the specifics of our Redis usage.
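To give a flavour of it, here is a minimal sketch of a stock variance expressed as a single weighted union over two sorted sets, written against the StackExchange.Redis client. The key names and data are made up for illustration; this isn't our production code:

```csharp
// Sketch: stock variance as a single weighted ZUNIONSTORE inside Redis.
// Key names and quantities are made-up illustration data.
using System;
using System.Linq;
using StackExchange.Redis;

class VarianceExample
{
    static void Main()
    {
        var redis = ConnectionMultiplexer.Connect("localhost");
        var db = redis.GetDatabase();

        // Member = SKU, score = quantity.
        db.SortedSetAdd("stock:expected:store42", "SKU-001", 10);
        db.SortedSetAdd("stock:expected:store42", "SKU-002", 4);
        db.SortedSetAdd("stock:counted:store42", "SKU-001", 8);
        db.SortedSetAdd("stock:counted:store42", "SKU-002", 4);

        // ZUNIONSTORE with weights 1 and -1: each member's score becomes
        // expected minus counted, computed entirely inside Redis.
        db.SortedSetCombineAndStore(
            SetOperation.Union,
            "stock:variance:store42",
            new RedisKey[] { "stock:expected:store42", "stock:counted:store42" },
            new double[] { 1, -1 });

        // Only the (usually small) set of non-zero variances comes back out.
        var variances = db.SortedSetRangeByRankWithScores("stock:variance:store42")
                          .Where(e => e.Score != 0);

        foreach (var entry in variances)
            Console.WriteLine("{0}: {1}", entry.Element, entry.Score); // SKU-001: 2
    }
}
```

All the heavy lifting happens inside Redis; the service only reads back the small result set.

Our high level architecture looks like this…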