About The Playbook
Korio is a group of professionals who came together to help companies re-platform and become agile and fast. We prefer to work with established companies in the mid-market who might be struggling with the transformation to digital. Collectively, we have gone through dozens of transformations. We have witnessed successes and failures.
Based on what we have learned, we have become quite opinionated on what works and what doesn't. This short post is part of a larger Playbook that we have assembled based on what we have learned.
For details on our approach to data and analytics, explore the links below.
As we learn more from our clients, and if we think the learning is A) actionable, B) validated, and C) not stating the obvious, we will add it to our Playbook.
Remember, everything we propose can be done incrementally and with very little lead-time. If you want to adopt the plays in a more gradual manner, go ahead but consider setting each up as an experiment: have a hypothesis, measure the impact and act on what you learn.
The Play: Adopt an Enterprise Data Pipeline
Deciding what data to capture in our digital experiences and back-office processes is hard work. Ultimately, it is easier - and better - to capture all events. To bring this point home, we probably need to define a few things.
Technically, a Data Pipeline can be thought of as a pipe that holds every meaningful event that any one of your systems generates. Think of it as a log of everything that happens on your systems - and any integrated 3rd party systems for that matter. Because disk space and bandwidth are relatively cheap, and because "data is the new oil", we err on the side of capturing everything.
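As a rough illustration, here is a minimal sketch (in Python) of what a single captured event might look like once every system emits its activity in a common envelope. The field names and the "order.placed" event type are hypothetical, not a prescription; real pipelines will define their own envelopes.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from uuid import uuid4
import json

@dataclass
class Event:
    """A generic envelope for any business event flowing through the Pipeline."""
    event_type: str    # e.g. "order.placed", "cart.abandoned" (hypothetical names)
    source: str        # which system or module emitted the event
    payload: dict      # the business data itself
    event_id: str = field(default_factory=lambda: str(uuid4()))
    occurred_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

# Every system emits its events in the same shape, so downstream analytics
# never has to guess what was captured, by whom, or when.
event = Event(
    event_type="order.placed",
    source="checkout-service",
    payload={"order_id": "A-1001", "total": 149.95, "currency": "USD"},
)
print(json.dumps(asdict(event), indent=2))
```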
Imagine the analytical capabilities you gain from, essentially, capturing everything.
Besides producing rich data for analytics, this same Data Pipeline acts as the backbone or "message bus" that allows us to be modular and incremental.
Unlike a traditional approach to capturing data in databases, the Pipeline carries data associated with a business event from one module and broadcasts it (securely) to any other module that needs that data to do its job. The power of this cannot be overstated.
First, and most significantly, it allows us to add services incrementally, over time, that can extract more insight or simply do more with that upstream business event than we contemplated when we first built the upstream service.
Second, the Data Pipeline concept demands a strongly enforced discipline for defining data contracts/APIs and, more generally, Data Governance. Data Governance is the set of rules that ensures our data is of high quality and that it is secure.
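To make the idea of a data contract concrete, the sketch below expresses a contract for the hypothetical "order.placed" event as a set of required fields and types that a producer must satisfy before publishing. Real pipelines typically manage this with a schema registry and formats such as Avro or JSON Schema; this is just the idea in miniature, using only the Python standard library.

```python
# A data contract for one event type: the fields every producer must supply
# and the types consumers can rely on (hypothetical field names).
ORDER_PLACED_CONTRACT = {
    "order_id": str,
    "total": float,
    "currency": str,
}

def validate(payload: dict, contract: dict) -> None:
    """Raise if the payload violates the contract - governance in its simplest form."""
    for field_name, expected_type in contract.items():
        if field_name not in payload:
            raise ValueError(f"missing required field: {field_name}")
        if not isinstance(payload[field_name], expected_type):
            raise TypeError(f"{field_name} must be of type {expected_type.__name__}")

# A well-formed payload passes silently; a malformed one fails before it hits the Pipeline.
validate({"order_id": "A-1001", "total": 149.95, "currency": "USD"}, ORDER_PLACED_CONTRACT)
```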
Finally, with a Data Pipeline and the schemas or data contracts established, completely separate teams can conceive of and build downstream services. This is typically called "loose coupling". It means that the upstream team whose service is firing the events into the Pipeline (according to a contract or schema) doesn't need to know anything about the efforts of the downstream team that wants to consume that data for their service.
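The sketch below illustrates that loose coupling under the same assumptions: an upstream module publishes contract-conforming events onto a bus, and independently built downstream services subscribe by event type. The in-memory bus here is only a stand-in for a real event stream (Kafka or a cloud equivalent); the point is that the producer never knows who is listening.

```python
from collections import defaultdict

class Pipeline:
    """An in-memory stand-in for the Data Pipeline / message bus."""
    def __init__(self):
        # Maps an event type to the list of downstream handlers subscribed to it.
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        # Broadcast the event to every downstream service that asked for it.
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = Pipeline()

# Built later, by different teams, with no changes to the upstream service:
bus.subscribe("order.placed", lambda p: print(f"analytics: logged order {p['order_id']}"))
bus.subscribe("order.placed", lambda p: print(f"loyalty: awarding points for {p['order_id']}"))

# The upstream team only honors the contract; it knows nothing about the consumers.
bus.publish("order.placed", {"order_id": "A-1001", "total": 149.95, "currency": "USD"})
```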
This may not sound like a big deal, but trust us, it is. We always recommend multiple Small Autonomous Teams (SAT) be assigned individual ownership of Value Maps and the system modules that support the implementation of those maps. In most cases, with Low-Code technology, each SAT might need 50% FTE of an assigned developer. If you have 12 SATs and 6 developers, each team can work independently on their critical segment of the platform without being gridlocked.
Why Bother
- Rich data, made available in a standardized way, is what drives advanced analytics and enables Small Autonomous Teams to do their best work as they seek to optimize their part of the platform
- Because the data is comprehensive and available in real time, it provides the fact base for near-real-time optimization - and increases in speed mean lower cost and better solutions, faster.
- The Data Pipeline unlocks parallel development and a sense of true ownership by Small Autonomous Teams
What To Avoid
- This is a new way of thinking about how systems are developed. While your analytics team will be thrilled with the volume of quality, well-governed data, your developers may not be as keen. Be sure to work closely with your developer teams to ensure they understand the hows and whys of working with asynchronous event streaming.
The Fallback
- We consider the Data Pipeline mandatory because of the criticality of rich data and because it unlocks parallel development by Small Autonomous Teams. It is possible to build modular systems without an asynchronous event stream, but, ultimately, it will prove much harder than simply learning this new paradigm.