In the remediation industry, portfolio performance, spend management and even forecasting ability all stem from an unlikely factor: where and how you manage and store your data.
The approach companies take to data management usually falls into one of three baskets: the decentralized model, the hybrid model, or the enterprise model.
By the end of this blog, you’ll understand precisely how each of those strategies affects a company’s ability to generate performance from its remediation program.
So, let’s get into it.
In the decentralized model, your data is scattered all over the place in many different formats (silos). Some data will be stored with your laboratories, some with your current consultants, some with your former consultants — it ends up sprawled throughout offices and data centers all over the world. Internally, it may be strewn about your own networks and hard drives, and some may even be on — *gulp* — paper.
This model represents the lowest level of technical maturity. One thing we’ve found is that many people believe information management costs them little in the decentralized model. In reality, it costs the most of the three. The cost is hidden because you’re not being charged for software; you’re being charged professional service fees to manage that information in a decentralized way, which is the least efficient and effective way possible.
Next up is the hybrid model.
This is one step up in terms of technical maturity. Usually, in these cases, the organization has retained a consulting company to house the data and perform data quality management. However, in this model the data is impounded: it is kept from end-users until this gatekeeper has released or signed off on it. That is a laborious and expensive journey.
Another significant problem with this approach is that it's generally a single-purpose application: it's designed for analytical data, and perhaps sample locations or field data, but little else.
Nor is it managing project portfolio information, financial information, or engagement with the supply chain. These data scope limitations end up breeding shadow systems for important activities like data preparation, joining data from multiple source systems, and analytics.
The enterprise model is the highest level of digital maturity.
It also has the lowest total cost of ownership and the highest level of data velocity, meaning that getting data from the field and lab into the hands of end-users is a friction-free process. The enterprise model allows for multi-purpose information management activities and business workflows. This is a comprehensive data management approach that, by its very nature, joins all important data categories, such as site information, financial information, and technical data, in one centralized system.
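To make "one centralized system" concrete, here's a minimal sketch in Python. The types and field names are our own illustration, not the ENFOS schema; what matters is the shared site key that keeps every data category joinable in a single lookup:

```python
from dataclasses import dataclass

# Illustrative types only, not the ENFOS schema. The point is the shared
# site_id key: every data category can be joined without manual stitching.

@dataclass
class Site:
    site_id: str
    name: str

@dataclass
class ProjectFinancials:
    site_id: str              # same key as Site
    lifecycle_estimate: float
    short_term_cost: float

@dataclass
class SampleResult:
    site_id: str              # same key again: technical data is born joined
    analyte: str
    value: float
    units: str

def site_context(site_id, sites, financials, results):
    """Pull every data category for one site in a single lookup."""
    return {
        "site": next((s for s in sites if s.site_id == site_id), None),
        "financials": [f for f in financials if f.site_id == site_id],
        "results": [r for r in results if r.site_id == site_id],
    }
```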
This isn’t just a simple advantage — in the remediation industry, you have environmental consultants, environmental contractors, and laboratories all generating data. There are many workflows leading up to the point of having information that's ready for deliverables to the customer or to the regulators: the suppliers and labs do all of the front-end work around sample event planning and communicating orders to laboratories and field contractors. They collect both field data and lab samples. They track samples and deliverable dates with the laboratories. They receive data inputs from the laboratories, and so on. These activities are repeated over and over again on thousands of sites and sample events for each consultancy managing this work.
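To picture how repetitive that front-end work is, here's a small sketch of the same sample-event lifecycle, with state names invented for illustration rather than taken from any actual tracking system:

```python
from enum import Enum, auto

# Hypothetical state names for illustration. The same lifecycle repeats on
# thousands of sites and sample events, which is what makes it worth
# standardizing instead of re-tracking from scratch each time.

class EventStatus(Enum):
    PLANNED = auto()      # sample event planned
    ORDERED = auto()      # orders communicated to lab and field contractor
    COLLECTED = auto()    # field data and lab samples collected
    AT_LAB = auto()       # samples and deliverable dates tracked with the lab
    RECEIVED = auto()     # lab data inputs received
    FINALIZED = auto()    # QA/QC complete, ready for the customer deliverable

NEXT_STATE = {
    EventStatus.PLANNED: EventStatus.ORDERED,
    EventStatus.ORDERED: EventStatus.COLLECTED,
    EventStatus.COLLECTED: EventStatus.AT_LAB,
    EventStatus.AT_LAB: EventStatus.RECEIVED,
    EventStatus.RECEIVED: EventStatus.FINALIZED,
}

def advance(status: EventStatus) -> EventStatus:
    """Move a sample event to its next state (FINALIZED is terminal)."""
    return NEXT_STATE.get(status, status)
```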
Once the data has been finalized, meaning it has gone through all of the QA and QC required by the data quality management plans and has been vetted and signed off internally within the consulting organization, the preferred workflow is delivering the customer's final data to the enterprise system. What that does is put clean data into an enterprise system that already has the foundations for sites and projects established within it. It already has the financial information associated with those projects, both the lifecycle estimates and the short-term scopes of work and costs. The data can be brought in via an API or an EDD and immediately joined with the rest of the content, giving the customer data and context. ENFOS calls this the Non-Invasive Data Management approach. In a nutshell, the data providers (lab to consultant) keep using their existing software in this model; when data has been deemed final, they simply provide their customer (via the enterprise system) with the final data deliverable.
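Here's a rough sketch of what that final-deliverable handoff can look like. The load_edd() helper and its column names are assumptions for illustration, not the ENFOS API; the key idea is that an incoming row must match a site that already exists in the enterprise system:

```python
import csv

# Illustrative only: load_edd() and its column names are assumptions for
# this sketch, not the ENFOS API. The key behavior: a final EDD row is only
# accepted if its site already exists in the enterprise system, so the data
# arrives pre-joined to its site, project, and financial context.

def load_edd(path, known_site_ids):
    rows = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["site_id"] not in known_site_ids:
                # No site foundation, no import: this is what keeps the
                # enterprise system clean and joinable.
                raise ValueError(f"Unknown site_id: {row['site_id']}")
            rows.append(row)
    return rows  # final, vetted data, ready to join with existing content
```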
As the cherry on top, customers are able to integrate their enterprise/ENFOS data, with all of those components already joined, with other data sources and feed it into their data warehouse and business intelligence applications.
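As a sketch of that handoff (again with hypothetical field names), the warehouse export becomes a simple flattening of data that is already joined, rather than a cleansing project:

```python
# Hypothetical field names again. Because site, financial, and technical
# data are already joined, the export is a flattening step, not cleanup.

def to_warehouse_rows(site, financials, results):
    """Flatten one site's joined data into analytics-ready rows."""
    lifecycle_total = sum(f["lifecycle_estimate"] for f in financials)
    for r in results:
        yield {
            "site_id": site["site_id"],
            "site_name": site["name"],
            "analyte": r["analyte"],
            "value": r["value"],
            "units": r["units"],
            "lifecycle_estimate": lifecycle_total,  # financial context rides along
        }

rows = list(to_warehouse_rows(
    {"site_id": "S-001", "name": "Former Refinery"},
    [{"lifecycle_estimate": 2_500_000.0}],
    [{"analyte": "benzene", "value": 4.2, "units": "ug/L"}],
))
```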
Closing thoughts
Ultimately, not only does the enterprise model vastly reduce the amount of friction and time it takes to retrieve, process, and analyze data — it eliminates low-value working hours spent identifying data sources, acquiring data, cleansing data, and consolidating data.
This sets up organizations not only to harness real insights within their data, but also to be prepared for any type of data inquiry that comes down the line. The benefits of adopting this proactive model only compound as your dataset expands.
Do you want to see where your own organization fits in terms of data maturity? Take the next step and ask ENFOS about the Business Capability Assessment to determine your untapped potential. Book Now.