What you’ll learn in this blog
- Why do unplanned information requests hurt your team?
- Why are emerging contaminants drawing C-Suite attention?
- What are the three levels of digital maturity for remediation programs?
- What is the Enterprise Business Model?
If you’re managing a remediation program, there are few words that will unnerve you more than “unplanned information request.”
These C-Suite-driven deep dives have become more frequent in recent years, and depending on the nature of the request, remediation teams often find themselves completely unprepared. They're essentially asked to drop everything and focus on these information requests, which typically center on emerging contaminants, regulatory factors, environmental and human-health risks, remedy selection and operation, and stakeholder expectations.
Simply put, these requests are costly. Not only are they expensive in terms of the direct cost of the project itself, but also due to the indirect costs of the surrounding disruption.
Breaking Down the Task
Any project manager knows that having a clear and concise work breakdown structure is essential for understanding how to execute a task. When it comes to tackling enormous information requests (yes, those are projects just like anything else we do in Remediation), this is especially true.
Take a look at this sample breakdown:
The highest-level task is, of course, fulfilling the request. We can break that down into major activities on level two (the blue-shaded boxes)—clarifying the assignment; identifying your data sources; acquiring the data; cleansing the data; and consolidating, verifying, and then preparing your deliverable.
Underneath that level is where the work actually happens. We can see a huge span of tasks and activities that are required to fulfill that overall data request. Depending on the magnitude and the schedule of the information request, the number and time cost of these activities will vary—but if they land on your plate, you can pretty much kiss your existing to-do list goodbye.
The activities in the center of this chart, namely “Acquiring the data,” “Cleansing the data,” and “Consolidating the data,” are by far the most time-consuming and costly. The two biggest factors that affect your ability to complete those tasks are your organization’s digital maturity and your technical capability (we’ll go into those a bit further down).
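To make the breakdown concrete, here is a minimal sketch of the work breakdown structure as a nested data structure. The level-two activity names follow the article; the leaf-level tasks underneath them are illustrative assumptions, not taken from the actual chart.

```python
# Minimal sketch: the information-request WBS as a nested dict.
# Level-2 activity names follow the article; leaf tasks are illustrative.
wbs = {
    "Fulfill the information request": {
        "Clarify the assignment": ["Confirm scope", "Agree on deadline"],
        "Identify data sources": ["List consultants", "List labs", "Locate internal archives"],
        "Acquire the data": ["Request files", "Chase responses"],
        "Cleanse the data": ["Standardize units", "Deduplicate records"],
        "Consolidate the data": ["Merge datasets", "Resolve conflicts"],
        "Verify and prepare deliverable": ["QA/QC review", "Draft report"],
    }
}

def leaf_tasks(tree):
    """Flatten the WBS to its leaf-level tasks, where the real work happens."""
    for activities in tree.values():
        for tasks in activities.values():
            yield from tasks

print(len(list(leaf_tasks(wbs))))  # prints 13
```

Enumerating the leaves this way is a quick reminder that the level-two boxes each hide a long tail of concrete tasks, and the acquire/cleanse/consolidate branches grow fastest as the request gets bigger.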
Taking a Proactive Stance
Mitigating the pain of these information requests starts with implementing concrete strategies. Before we dig in a bit deeper, here are some top-level tenets that can help put your organization in the best possible position before information requests land on your desk.
- Automate where possible
- Create a knowledge base
- Predict trends and strategize
- Apply systems thinking
- Establish data governance
- Use environmental information modeling
- Adopt an enterprise-level business application
Central to these unplanned information requests are emerging contaminants. Right now, we're seeing an unprecedented regulatory and media focus on them. When primetime news shows are featuring segments on PFAS, you know it's a hot topic.
Emerging contaminants rank high among C-Suite concerns in the latest CEO survey data from PricewaterhouseCoopers and other firms monitoring these trends. Executives are realizing they need answers to pertinent questions: Who is responsible for the stewardship of property, particularly contaminated lands? What types of risks do we face? What are the uncertainties? And what are we going to do about them?
For environmental remediation managers and consultants, these questions tend to manifest as high-priority data requests.
“Many people believe that information management is a low cost for them in the decentralized model. In reality, it's the highest.”
A Look at the Three Major Strategies for Storing Data
So, when you're asked to fulfill a large information request, the first question is: where are our data sources? The answer is, it depends.
The Decentralized Model
On the left, you can see the most common data storage strategy—a decentralized one.
In this model, your data is scattered all over the place, in many different formats. You might find data with your laboratories, with your current consultants, or with your former consultants (who have offices and data centers all over the world themselves). It may be strewn across your own internal networks and hard drives, and some may even be on paper. You get the point. This is the lowest level of technical maturity.
One thing we've found is that many people believe information management is a low cost under the decentralized model. In reality, it's the highest. It's a hidden cost: you're not being charged for software; instead, you're paying professional service fees to manage that information in a decentralized way.
“Data in this model is impounded and kept from the end users until it's been released or signed off on by this gatekeeper. This is a very laborious and expensive journey.”
The Hybrid Model
In the middle of the illustration, you'll see the hybrid model. This is one step up in technical maturity: the organization has generally retained a consulting company to house the data and perform data quality management. However, data in this model is impounded and kept from end users until it has been released or signed off on by the gatekeeper. This is a laborious and expensive journey.
Another significant problem with this approach is that it's generally a single-purpose application: it's designed for analytical data. Sample locations or field data may be involved, but the purpose remains narrow. It does not manage project portfolio information or financial information, it doesn't help with supply chain engagement, and so on.
The Enterprise Model
The enterprise model on the right is the highest level of digital maturity. It also has the lowest total cost of ownership. And it has the highest level of data velocity: that is to say, it is extremely efficient at getting data from the field and lab into the hands of end users. The enterprise model also supports multi-purpose information management activities and business workflows.
To dig one step deeper: in the remediation industry, environmental consultants, environmental contractors, and laboratories are all generating data. There's a lot of workflow leading up to the point of having information ready for delivery to the customer or the regulators. The suppliers and labs do all of the front-end work: sample event planning, communicating orders to laboratories, and conveying task requirements to field contractors. They collect both field data and lab samples. They track samples and delivery dates with the laboratories. They receive EDDs (electronic data deliverables) and reports from the laboratories. These activities are repeated over and over on thousands of sites and sample events for each consultancy managing this work. Then the critical data quality management processes ramp up: QA/QC, data verification, and sometimes data validation are on the journey to producing final data that is fit for use.
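The front-end tracking loop described above can be sketched as a simple status progression per sample event. This is only an illustration; the stage names and class are assumptions for the sketch, not part of any real system's workflow.

```python
from dataclasses import dataclass

# Illustrative status flow for one sample event, mirroring the workflow
# described above: plan -> order lab work -> collect -> lab analysis ->
# receive the EDD -> verify. All stage names are assumptions.
STAGES = ["planned", "ordered", "collected", "in_lab", "edd_received", "verified"]

@dataclass
class SampleEvent:
    site_id: str
    stage: str = "planned"

    def advance(self):
        """Move the event to the next workflow stage, stopping at the last."""
        i = STAGES.index(self.stage)
        if i < len(STAGES) - 1:
            self.stage = STAGES[i + 1]

event = SampleEvent(site_id="SITE-001")
for _ in range(4):
    event.advance()
print(event.stage)  # prints edd_received
```

Multiply this small loop by thousands of sites and sample events per consultancy, and the repetitive tracking burden the article describes becomes easy to see.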
The best consulting companies do this very well—others, not so much. But as the owner of the environmental projects or liability, it’s not really your role to tell the consultants how they will do their work leading up to the final deliverable. The owner wants a high-quality “final” deliverable (EDD and Report) loaded into their enterprise system (i.e., ENFOS) in the shortest amount of time from sample collection. This approach is the Non-Invasive Data Stewardship Model, pioneered by ENFOS.
Using APIs or EDDs in an automated process, consultants push their final data and other deliverables to the primary enterprise system owned by the customer. This produces clean technical data within the owner's enterprise system for all of their sites, regardless of past or current consultant assignments. Now there is “data in context”: the technical data is immediately joined with the other primary data categories (site and financial information). Joined data sets allow for business intelligence and programmatic analysis using multivariable methods and filters. These types of analysis are designed to produce insights that are not readily available when data silos exist. And, finally, data and other content is always available and always in a “final” status, making those emergency reporting requests feel like a normal task rather than a show stopper.
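To illustrate what “data in context” buys you, here is a minimal sketch of joining analytical, site, and financial records on a shared site key so that a cross-cutting question becomes a one-line filter. All field names, keys, and values are invented for the example and do not reflect any real system's schema.

```python
# Sketch of "data in context": joining analytical, site, and financial
# records on a shared site key. Schema and values are illustrative only.
analytical = [
    {"site_id": "S1", "analyte": "PFOA", "detected": True},
    {"site_id": "S2", "analyte": "Benzene", "detected": True},
    {"site_id": "S3", "analyte": "PFOS", "detected": False},
]
sites = {"S1": {"status": "active"}, "S2": {"status": "closed"}, "S3": {"status": "active"}}
financials = {"S1": {"spend_usd": 120000}, "S2": {"spend_usd": 45000}, "S3": {"spend_usd": 80000}}

def data_in_context(records):
    """Join each analytical record with its site and financial context."""
    return [{**r, **sites[r["site_id"]], **financials[r["site_id"]]} for r in records]

# Example multivariable filter: active sites with PFAS-family detections.
pfas_active = [
    r for r in data_in_context(analytical)
    if r["detected"] and r["status"] == "active" and r["analyte"].startswith("PF")
]
print([r["site_id"] for r in pfas_active])  # prints ['S1']
```

With the three data categories siloed, answering that same question would mean chasing files from consultants, labs, and finance; once they share a key in one system, it is a filter over a joined record set.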
So, when you consider the work breakdown and the bulk of all that work (identifying data sources, acquiring data, cleansing data, and consolidating data), remember that the enterprise model eliminates it. This gives organizations the best opportunity to respond to information requests about emerging contaminants, as well as any other type of request that might come along.
Adopting this kind of proactive model is a massive leap forward when it comes to data maturity and the ease with which you are able to interface with your data.