Data Ownership In The Track & Trace Cloud

Important Notice To Readers of This Essay: On November 27, 2013, President Barack Obama signed the Drug Quality and Security Act of 2013 into law. That act has many provisions, but one is to pre-empt all existing and future state serialization and pedigree laws like those that previously existed in California and Florida. Some or all of the information contained in this essay is about some aspect of one or more of those state laws, so that information is now obsolete. It is left here only for historical purposes, for those wishing to understand those old laws and the industry’s response to them.

Who will own the data that supply chain trading partners store in some future cloud-based, semi-centralized Network Centric ePedigree (NCeP) data repository?  I met one potential future repository service provider who seemed to think that they would own that data.  Imagine their excitement.  All the data about where drugs go throughout the supply chain!  Think of the value they could mine from that.

Well, that’s never going to happen, because companies in the supply chain won’t sign up to hand over all of their supply chain data to some third party just so they can comply with regulations, especially when an alternative approach exists that would let them comply without using a third party at all (DPMS).  And regulatory agencies are not trying to destroy a company’s significant source of revenue (from selling this data) by imposing track & trace and ePedigree regulations.  That’s not the intent of these laws.

In earlier essays I’ve referred to the semi-centralized NCeP as a “cloud-based” repository, and based on some comments I’ve received privately and publicly, I think I have introduced some confusion.  When talking about data ownership in “the cloud”, we need to be a little more specific about what “the cloud” is.  It turns out there are four different types of cloud-based data repositories, and each has different implications for who owns the data stored in it.

Public Cloud
An internet data repository that is made available by a third-party service provider to the general public for free (for a limited amount of space) or pay-per-use (for larger space).  Access controls are simple and data ownership rights can be nebulous (read the service provider’s terms and conditions carefully).

Private Cloud
A data repository that is operated solely for the private use of a single entity.  Access controls are very tight and data ownership is never in doubt.

Community Cloud
A data repository that is operated for use by a limited set of organizations that have some common interest or concern.  Access controls are usually simple and data ownership rights are either not well defined or are defined in a way that the community owns the data stored.

Semi-private Cloud, or Hybrid Cloud
An internet data repository that is made available by a third-party service provider to a limited set of companies under contract.  Access controls are very tight and data ownership can be well defined by the contract.

It’s the semi-private cloud that is most applicable to the NCeP application.  I would argue that the centralized and semi-centralized NCeP concepts directly imply a semi-private cloud as the very foundation of their implementations.  Contracts will carefully identify precisely who owns each morsel of data and specify who can access it under which conditions.  The contracts must come first; the technology to implement the conditions spelled out in them will come second.  The pilots we have seen to date have tested the technology but done little to test the contracts that will be needed.


So if the operator of the repository technology won’t own the data stored there, who should own it?  I’m not a lawyer, and I’m sure there is centuries’ worth of case law that differentiates possession, custody and ownership in many scenarios.  I’ll leave the full review of that to someone else, because, to me, it’s fairly obvious who should own the data.

The company that writes a given record into an NCeP repository should expect to retain all rights of ownership of the data it contains.  If the contract they sign doesn’t spell it out that way, well, there is always DPMS.  What I mean by that is, a semi-private cloud-based NCeP has no hope of being used as a viable alternative to DPMS unless companies know in advance that they will not lose any data ownership rights.  That’s why, as a practical matter, the contract language will have to come before the technology is used.

I’ve written a little about this issue before (see “Who owns supply chain visibility data?”).


But the whole point of an NCeP is to enable controlled data sharing…with the emphasis on the word “controlled”.  The NCeP architecture must provide very advanced, very granular rules that data owners can choose to invoke for each trading partner or class of trading partner that will enable them to control exactly who gets to have access to what.  The default sharing setting should be “not shared” except to fulfill ePedigree regulations.  This is what I am calling “intentional and targeted” data sharing.  The only data that is shared is the data that the owner intends to share and only with the targeted organization.
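To make the “intentional and targeted” idea concrete, here is a minimal sketch of default-deny sharing rules in Python.  All of the names (`SharingRule`, `RepositoryRecord`, `can_access`) are hypothetical illustrations, not part of any actual NCeP standard or product: the point is simply that a record exposes nothing unless its owner has attached an explicit rule naming the target organization and the data it may see.

```python
from dataclasses import dataclass, field

@dataclass
class SharingRule:
    """One intentional, targeted grant: a single target organization
    and the record types it is allowed to see."""
    target_org: str
    record_types: set

@dataclass
class RepositoryRecord:
    owner_org: str
    record_type: str          # e.g. "pedigree", "shipment_event"
    payload: dict
    rules: list = field(default_factory=list)  # empty list => "not shared"

def can_access(record: RepositoryRecord, requesting_org: str) -> bool:
    """Default deny: only the owner, or an org named by an explicit
    rule covering this record type, gets access."""
    if requesting_org == record.owner_org:
        return True
    return any(rule.target_org == requesting_org and
               record.record_type in rule.record_types
               for rule in record.rules)
```

In this sketch, sharing is opt-in per trading partner: the owner invokes a rule for one organization without exposing the record to anyone else.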

With this level of control, supply chain data owners would actually be able to sell more of their data to those who are willing to pay for it.  And use of point-to-point Electronic Data Interchange (EDI) messages to convey this information would no longer be necessary.  The NCeP repository would become the point of delivery to the consuming organization with the selling organization simply invoking the rule(s) that expose(s) the data to the buying organization(s).  Pricing for access to the data would be a negotiating point solely between the data owner and the prospective buyer(s) and would most likely be a part of the existing Distribution Services Agreements (DSA) negotiating process.  (Click here for an example of an old DSA for pharmaceuticals.)

Sharing controls must also have a configurable termination time/date so that, from the beginning, the termination of the rules can be set to the termination time/date of the sharing agreement.  Termination would then be automatic unless the agreement is terminated early for some reason.
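The automatic-termination idea can be sketched in a few lines.  This is a hypothetical illustration (the function name and parameters are my own, not from any standard): a rule simply stops granting access once the agreement’s end date passes, with no manual revocation step required.

```python
from datetime import datetime, timezone

def rule_is_active(agreement_end: datetime, now: datetime = None) -> bool:
    """A sharing rule grants access only while the underlying sharing
    agreement is still in force; after agreement_end it silently expires."""
    now = now or datetime.now(timezone.utc)
    return now < agreement_end
```

An early termination would then be handled by simply moving `agreement_end` forward to the cancellation date.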

Data ownership recognition and control will have to be figured out before a centralized or semi-centralized NCeP will be successful.  I hope you can see how important this work is.  As I’ve reported before, the GS1 Pedigree Security, Choreography and Checking Services (PSCCS) work group is now figuring out how to create standards that will enable many of these things.  But time is short.  After the standards are set, solution providers will need to develop offerings that conform to them.  Contract language will need to be developed and everything will need to be tested before it can be used.

As I pointed out in last week’s essay, if this is all going to be used for California compliance, we’ll need to hear from the Board of Pharmacy that they will accept it, or else companies won’t have the confidence to keep the development ball rolling.  But that’s a bit of a Catch-22, since the Board doesn’t like to say in advance exactly what will comply and what won’t.  Maybe we can get Michael Ventura to ask them directly.


3 thoughts on “Data Ownership In The Track & Trace Cloud”

  1. Dirk, I love it – “Well, that’s never going to happen ….”. That is in fact never going to happen. The fear factors revolving around the ownership (or, let’s just call it, “control”) of enterprise data are so strong. But there are technologies coming along – like those blogged about by you in The Significance of the Abbott, McKesson and VA Pilot – which will provide companies with the opportunity to reveal more data, and granularly so. On the consumer side of the equation, the push by the big search engines toward “semantic search” provides greater opportunity for the kind of search that brings one single answer to a consumer’s query on their mobile technology. That’s what Google’s “I’m Feeling Lucky” button is for. And that’s no misnomer, because one has to be extremely lucky to get a single “spot on” answer to a question about a product. It is oft said that there is already “too much” data and that we are “awash” in data. But when it comes to the availability of data about products in supply chains, there is a dearth of availability. Where did the product come from? Is the product a composite of many different sources, and what are those sources? Were sustainable measures used in producing the product? Was slave labor used? What are the safety concerns? What do I do in the event of a recall? Within all of these consumer questions (and many, many more) lies the opportunity. And it’s these questions that Google, Bing, Apple, et al. want to answer with their semantic search engines.

  2. Piling on — “Well, that’s never going to happen…” — no it’s not — scalable distributed cloud enterprise systems are not going to come from traditional enterprise vendors (nor from SQL database vendors).

    Instead the solution architecture is going to come from the web 2.0 business architecture and open source NoSQL database projects incubated at on-line social networks like Facebook, search engines like Google, and selective sharing from the NSA.

  3. Dirk,
    As always, I appreciate the essays and insights. I do want to take exception with how you describe “cloud” alternatives. You make the Public and Community clouds sound as if they are porous and offer insignificant security. This is FAR from the case. A lot of companies use the cloud for large-scale services with excellent security. “Private” or “semi-private” is better used to describe access than security.

    The argument over data ownership is as old as the industry and seems to be getting “worn out” to many of the people I speak with. I suspect that many companies hold on to data because they’ve always done so, and not because of any real value. That’s my rant for the day.

Comments are closed.