How do DevOps teams leverage maximum value from their aging data assets?
The American statistician W. Edwards Deming is often quoted as saying, ‘Without data, you’re just another person with an opinion’. Data matters, and it’s central to digital transformation. The problem is that no matter how much companies invest in their data, and however they capture it, its quality will always be suspect. Without good data, any idea of digital transformation is likely to remain a pipe dream.
Most companies hoard gigabytes of data on their finances, products, customers and markets. The difficulty is that almost no enterprise has its data organized in a structure that makes it easy to access. As soon as business leaders come up with ideas for business model reinvention, the next thought in the minds of DevOps leaders is probably ‘Where is the data coming from?’
Digital transformation projects have a habit of either generating new data (as in sensor-network-centric projects), reusing old data (such as plotting assets or customers on a map and gaining value from location-centric perspectives), or blending the two. Reusing data found within the enterprise can be challenging because of data quality issues, the variations in data structures and field formats between applications, and the difficulty of getting data out of systems. This means DevOps teams need very good data management skills.
So how do DevOps teams approach their data challenges? Here are some examples of how DevOps teams are reinventing their old data with Encanvas.
1. Mashing data
Old data may be held in various applications and formats. It’s not uncommon for Encanvas to gather information from spreadsheets and big back-office system databases like SAP R/3, IBM DB2, Microsoft Dynamics and SQL – all at the same time. Encanvas is a plug-and-play, multi-threaded, multi-sourcing platform, which means designers can create concurrent live data feeds from multiple systems or endpoints. Designers use this capability extensively when creating applications that reuse data from existing and new systems together, creating new data structures on the fly for the specific canvases they author as part of applications under development.
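Encanvas does this mashing codelessly, but the underlying idea – merging records from several systems into one unified structure – can be sketched in plain Python. The source names and fields below are hypothetical stand-ins for real feeds:

```python
# Sketch: combine records from two hypothetical sources (an ERP export
# and a CRM export) into one unified structure, keyed on customer ID.
erp_rows = [
    {"cust_id": "C001", "balance": 1200.50},
    {"cust_id": "C002", "balance": 310.00},
]
crm_rows = [
    {"cust_id": "C001", "name": "Acme Ltd", "region": "EMEA"},
    {"cust_id": "C002", "name": "Globex", "region": "APAC"},
]

def mash(*sources):
    """Merge dict records from any number of sources on 'cust_id'."""
    merged = {}
    for source in sources:
        for row in source:
            merged.setdefault(row["cust_id"], {}).update(row)
    return list(merged.values())

combined = mash(erp_rows, crm_rows)
```

Each record in `combined` now carries fields from both systems, which is the essence of what a mash-up platform automates behind the scenes.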
2. Special filters
Your old data may require filtering to select only the records relevant to your project. A powerful feature built into Encanvas’ mash-up environment is its special filter, which lets designers use drag-and-drop controls to instantly build powerful filtering of inbound data from third-party sources. Any number of filters can be applied to a table at the same time. For example, if a designer wants to ingest only the customer records of a specific type that relate to a specific region, they can create special filters for ‘type’ and ‘region’, selecting only the records that match those conditions. All of this configuration is done without any coding and doesn’t affect the integrity of the ingested table, or its potential for reuse in its native form by other applications (or canvases).
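In Encanvas the filters are configured by drag and drop; conceptually, applying several filters at once amounts to a logical AND over field/value conditions, and crucially the source table is left untouched. A minimal sketch, with hypothetical field names:

```python
records = [
    {"id": 1, "type": "retail", "region": "North"},
    {"id": 2, "type": "wholesale", "region": "North"},
    {"id": 3, "type": "retail", "region": "South"},
]

# Each filter is a field/value pair; all must match (logical AND).
# The source list itself is never modified, so it stays reusable.
filters = {"type": "retail", "region": "North"}

selected = [r for r in records
            if all(r.get(field) == value for field, value in filters.items())]
```

Only records satisfying every condition are ingested; `records` remains available in its native form.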
3. Enriching or validating data with third-source data
If your old data can benefit from being enriched by other data sources, Encanvas’ mash-up capabilities can really bring value by making internal and external data accessible to application designers without coding or building new API integrations. For example, back in 2002, we helped a client create an Encanvas application that integrated Lotus 1-2-3 customer data with a third-party industry database to enrich customer records, so that the client could run refined searches of their customers using fine-grained drop-down filters – using data that didn’t exist in their own database!
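The enrichment pattern itself is a simple join: for each internal record, look up matching fields in the external source and merge them in. A minimal sketch, with an entirely hypothetical industry feed:

```python
# Internal customer records lacking industry attributes.
customers = [
    {"company": "Acme Ltd", "city": "Leeds"},
    {"company": "Globex", "city": "Bath"},
]

# Hypothetical third-party industry database, keyed by company name.
industry_db = {
    "Acme Ltd": {"sector": "Manufacturing", "employees": 250},
    "Globex": {"sector": "Energy", "employees": 900},
}

# Merge external attributes into each record; unmatched companies
# simply gain no extra fields.
enriched = [{**c, **industry_db.get(c["company"], {})} for c in customers]
```

The enriched records now support filtering on `sector` or `employees` – attributes that never existed in the original database.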
4. Cleansing and transforming data
Sometimes old data requires cleansing at the point of transfer from its original location, using a machine-to-machine cleansing and transformation process to shed unwanted data and apply transformation rules that re-order, de-dupe and relocate data into new structures. Encanvas Information Flow Designer (IFD) is the machine-to-machine software module built into our architecture that equips designers to configure these ETL actions and normalize data before it is ingested into applications. IFD also automates the generation of notices to alert designers (and users too, if necessary) that transformations have worked – or not. Transformations can be triggered by events, scheduled times, watch-folder changes and a variety of other means.
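The core of such an ETL step – rename fields, normalize values, then de-duplicate – can be sketched in a few lines of Python. The field names and rules below are illustrative, not IFD's actual configuration:

```python
raw = [
    {"Cust Name": " acme ltd ", "Tel": "0113 555 0100"},
    {"Cust Name": "ACME LTD", "Tel": "0113 555 0100"},   # duplicate
    {"Cust Name": "Globex", "Tel": "01225 555 0199"},
]

def transform(rows):
    """Normalize field names, trim/case-fold values, then de-duplicate."""
    seen, out = set(), []
    for row in rows:
        clean = {"name": row["Cust Name"].strip().title(),
                 "phone": row["Tel"].strip()}
        key = (clean["name"].lower(), clean["phone"])  # de-dupe key
        if key not in seen:
            seen.add(key)
            out.append(clean)
    return out

cleaned = transform(raw)
```

A production flow would add the alerting step the text describes – e.g. raising a notice when a row fails its transformation rules.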
5. Quarantining data
A powerful (and pretty unique) feature of Encanvas lies in its ability to create quarantining protocols for old data that fails to live up to your expectations of data integrity. There’s little point uploading records that are unfit for purpose. If you are gathering customer records, for example, and decide that records lacking a contact email, telephone or mobile number are unsuitable for use, designers can create quarantining rules that filter this data out for special treatment. In such cases, the data remains ‘in the system’ but is no longer visible to users until it has been cleaned, manually or by machine.
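The quarantine rule from the example – hold back any record with no usable contact route – reduces to splitting the input into an accepted stream and a held stream. A minimal sketch with hypothetical records:

```python
records = [
    {"name": "Acme Ltd", "email": "sales@acme.example", "phone": ""},
    {"name": "Globex", "email": "", "phone": ""},   # no contact route at all
]

def quarantine(rows, contact_fields=("email", "phone")):
    """Route rows with no non-empty contact field into a holding area."""
    accepted, held = [], []
    for row in rows:
        target = accepted if any(row.get(f) for f in contact_fields) else held
        target.append(row)
    return accepted, held

accepted, held = quarantine(records)
```

Held records stay in the system but out of users' view until they are repaired and re-run through the rule.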
6. Applying voting systems to ingested data sources and end-points
It may be that old data is being ingested from multiple systems or endpoints and you need to create a new data mart that prioritizes the most likely source of good-quality data over the others. This can get complicated because different systems may create new data at different speeds, which introduces latency issues. Nevertheless, Encanvas has the codeless tooling to let designers author voting systems that decide which source is most trusted. Voting systems can use algorithms to automatically test data integrity and then adjust the voting structure, or they can be manual, where the data owner or manager uses a sliding scale of trust levels to determine which source is generating the best results (or both!).
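One simple way to realize such a voting system is to assign each source a trust score and, for every field, keep the non-null value from the most trusted source. This sketch uses hypothetical source names and manually assigned scores (the sliding-scale case); an algorithmic variant would compute the scores from integrity checks instead:

```python
# Hypothetical trust scores assigned by a data owner (0.0 - 1.0).
trust = {"erp": 0.9, "crm": 0.6, "legacy": 0.3}

# The same customer as reported by three endpoints.
candidates = {
    "erp":    {"cust_id": "C001", "region": "EMEA"},
    "crm":    {"cust_id": "C001", "region": "Europe"},
    "legacy": {"cust_id": "C001", "region": None},
}

def vote(candidates, trust):
    """For each field, keep the non-null value from the most trusted source."""
    result = {}
    # Visit sources lowest-trust first so higher-trust values overwrite.
    for source in sorted(candidates, key=lambda s: trust[s]):
        for field, value in candidates[source].items():
            if value is not None:
                result[field] = value
    return result

winner = vote(candidates, trust)
```

Here the ERP's `region` wins over the CRM's, and the legacy system's missing value is ignored entirely.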
7. Creating new data
When there are gaps in your old data, there are many ways Encanvas can create new data as part of its application design. For example, Encanvas’ numeric controls allow designers to create formulas and calculations on data – totalling columns, summing values, deriving averages and so on – that may be required for your new dashboards and reports but do not exist in the ingested data. Encanvas can also ingest SQL scripts and DLLs, making it easy for DevOps teams to reuse existing code blocks or create new APIs and transformations.
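The kind of derived field a numeric control produces – line totals, column sums, averages – is straightforward to illustrate. A minimal sketch with hypothetical order data:

```python
orders = [
    {"order_id": 1, "qty": 3, "unit_price": 9.50},
    {"order_id": 2, "qty": 1, "unit_price": 120.00},
]

# Derived fields that do not exist anywhere in the ingested data.
for order in orders:
    order["line_total"] = round(order["qty"] * order["unit_price"], 2)

grand_total = sum(order["line_total"] for order in orders)
average_order = grand_total / len(orders)
```

`line_total`, `grand_total` and `average_order` are all new data, computed at design time rather than stored in the source system.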
8. Location-centricity of data
Another way to create new data is to use Encanvas’ mapping capabilities to apply location data to existing addresses and locations. Encanvas has an integrated – and codeless – mapping engine (sometimes referred to as a Geographic Information System, or ‘GIS’). It allows designers to plot and pin records on maps, and the geo-data of those records is added to the data set (companies like Google and Microsoft charge a lot of money to do this!).
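Conceptually this is geocoding: resolving each address to coordinates and writing them back onto the record. A real system would call a geocoding service; the sketch below stands in a tiny hypothetical in-memory gazetteer so the pattern is visible:

```python
# Hypothetical in-memory gazetteer standing in for a real geocoding service.
gazetteer = {
    "Leeds, UK": (53.8008, -1.5491),
    "Bath, UK": (51.3811, -2.3590),
}

assets = [{"name": "Depot A", "address": "Leeds, UK"},
          {"name": "Depot B", "address": "Bath, UK"}]

for asset in assets:
    lat, lon = gazetteer.get(asset["address"], (None, None))
    asset["lat"], asset["lon"] = lat, lon   # geo-data added to the record
```

Once the coordinates are part of the data set, every downstream application can plot, cluster or distance-filter the records without re-geocoding.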
9. IoT API
Parachute a high-profile, technology-centric team with a strong leader into an organization with an existing IT department and it’s hardly surprising that you’re going to have to put out some fires and smooth a few ruffled feathers.
Balancing two-speed IT means having an internal IT team focused on reducing costs and improving process efficiencies through Business Transformation (BX) alongside a DevOps team reinventing business models through Digital Transformation (DX). Recognizing each team for its own skills and contributions to business outcomes, and balancing praise between them, is important for a healthy culture.
10. Building a wholly new data structure
We’ve saved the most dramatic way of fixing old data quality issues until last, because building a new data warehouse to gather and reorganize data into new structures is no small project – but sometimes it’s the most sustainable way to ensure data integrity is preserved for the life of your application. For mission-critical processes it probably delivers the best-quality outcome, although the time and investment needed to create a data warehouse or enterprise data hub are definitely non-trivial. Encanvas includes all the codeless tooling needed to fast-track the creation of new data warehouses and data marts using the data repository of your choice – whether you are moving towards a big data solution like Hadoop or seeking a more traditional data structure like SQL or DB2.
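At a much smaller scale, the structural idea of a warehouse or data mart – dimension tables for reference data, fact tables for events, queried together – can be sketched with Python's built-in SQLite. The schema and figures below are purely illustrative:

```python
import sqlite3

# Sketch: load cleaned records into a tiny in-memory data mart with a
# dimension table and a fact table (a scaled-down warehouse structure).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE dim_customer (cust_id TEXT PRIMARY KEY, name TEXT)")
con.execute("CREATE TABLE fact_sales (cust_id TEXT, amount REAL)")

con.executemany("INSERT INTO dim_customer VALUES (?, ?)",
                [("C001", "Acme Ltd"), ("C002", "Globex")])
con.executemany("INSERT INTO fact_sales VALUES (?, ?)",
                [("C001", 100.0), ("C001", 50.0), ("C002", 75.0)])

# Reporting query: total sales per customer, joined across the schema.
totals = con.execute(
    "SELECT d.name, SUM(f.amount) FROM fact_sales f "
    "JOIN dim_customer d ON d.cust_id = f.cust_id "
    "GROUP BY d.name ORDER BY d.name").fetchall()
```

A real warehouse adds history, conformed dimensions and load scheduling on top of this pattern, but the separation of facts from dimensions is the same.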
So there you have it – ten ways to turn your old data into useful data for your next digital transformation. To find out more about the capabilities of Encanvas DX, please contact our team.