Ever since the move from paper-based to electronic health records, people and organizations have been working to advance health care interoperability — the ability to exchange information between EHRs. The vision has been to interconnect EHRs and give providers the ability to look up a patient’s treatment history nationwide or even punch a few buttons on the computer to send supporting documentation for a referral to another provider. The idea is that the receiving provider, likewise equipped with an EHR, would simply import the information and browse the patient’s longitudinal medical record in support of the final diagnosis.
The HealthIT.gov dashboard shows that public policy initiatives, such as the CMS EHR incentive programs, have pushed hospital adoption of EHRs above 90%. In 2013, more than nine in ten (93%) hospitals possessed certified EHR technology. But we have not made much progress toward the utopia of unhindered information exchange, such as the referral scenario described above. Half of hospitals cannot query patient health information from external sources, and more than half cannot send or receive secure messages from their EHRs. In 2013, only 14% of physicians surveyed could electronically exchange information outside their organization.
Why does patient information exchange continue to elude us? Key findings from the eHealth Initiative’s 2014 Survey on Health Data Exchange singled out cost and technical challenges as the biggest inhibitors of interoperability. The survey further identified three major needs:
- Standardized pricing and integration solutions from vendors;
- Technology platforms capable of “plug and play”; and
- Federally mandated standards.
The large number of EHR systems, each with unique interface requirements, makes custom coding necessary when interfacing through proprietary methods or through health information exchanges, which use interface standards from Integrating the Healthcare Enterprise (IHE). Because this custom coding is labor-intensive, we have to look elsewhere to reduce interface costs. Later attempts to simplify exchange through point-to-point secure email (the Direct Project) and the Consolidated Clinical Document Architecture (C-CDA) have had only partial success, owing to their one-way communication design and the inability of many EHR systems to parse the C-CDA XML into discrete data in the receiving EHR.
With the intent of creating a federally mandated standard that enables plug and play, attention is now focused on the last two needs in the eHI survey. In fact, the federal government tasked JASON, an independent group of scientists who advise the government on matters of science and technology, with recommending ways to improve interoperability. Their report, titled "A Robust Health Data Infrastructure," was released in April 2014. ONC further commissioned a JASON report task force to study ways to improve health information interoperability. The task force recommended:
- Using a public (well known, open source, standards based) application programming interface (API) to interconnect systems through both push and pull; and
- Finding an intermediate level of data exchanged by the API that is not as ambiguous as HL7 version 2 and not as bulky as C-CDA but can still perform both document style and discrete data exchange.
Fortuitously, the experts over at Health Level Seven International were already working on a new way to interconnect systems, applying lessons learned from the practical implementation of HL7 version 2, which was published about 25 years ago, and version 3, which was published 10 years ago. This new standard — called Fast Healthcare Interoperability Resources, or FHIR — was modeled after the new Web-based technologies that worked well on a large scale for Google, LinkedIn, Facebook, Twitter, etc. Having reached a milestone called Draft Standard for Trial Use (DSTU), FHIR is in an incubation state where standards are rigorously tested. However, some proponents such as Cerner, Epic, Intermountain Health System and Boston Children’s Hospital have already built Web applications using it. This enthusiasm for adoption attests to the attractiveness of FHIR.
Because FHIR looks very promising, the Office of the National Coordinator for Health IT added this emerging standard to its 10-year interoperability vision in January. FHIR must, however, attain the status of a fully developed published standard before it can be mandated. To accelerate FHIR's development to the next milestone, DSTU 2, Health Level Seven International launched a joint initiative called the Argonaut Project, comprising almost a dozen health systems and vendors, including Epic and Cerner. To understand what more needs to be done to achieve standardization, let us examine FHIR in more detail.
FHIR uses a transport mechanism, a standard way to transfer data from one system to another, called Representational State Transfer, or REST. REST is a formal name for the way Google Chrome, Firefox, Internet Explorer and other browsers retrieve data from the Internet. REST has been in use since the World Wide Web was invented; it is stable, well understood and inexpensive to develop with.
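To make the REST idea concrete, the short sketch below shows how a FHIR client would address a single record: reading a resource is just an HTTP GET against a predictable URL. The server address and patient ID here are invented for illustration; real deployments will differ.

```python
# Sketch of a RESTful FHIR "read": an HTTP GET against a resource URL.
# The base URL and ID below are hypothetical examples, not a real server.

def fhir_read_url(base_url: str, resource_type: str, resource_id: str) -> str:
    """Build the REST URL for reading a single FHIR resource."""
    return f"{base_url}/{resource_type}/{resource_id}"

url = fhir_read_url("https://ehr.example.org/fhir", "Patient", "123")
print(url)  # https://ehr.example.org/fhir/Patient/123
# A client would now issue: GET <url>
# asking for JSON with a header such as "Accept: application/json+fhir"
```

Because the URL pattern is the same for every resource type, a developer who can fetch a Patient can fetch a Medication or an Observation with no new interface work, which is exactly the cost advantage REST brings over custom point-to-point interfaces.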
Once REST gets the data, or payload, from one system to another, the contents of the payload must still be understood. This is an area where FHIR shines. FHIR uses the detailed, elaborate HL7 version 3 Reference Information Model to derive about 100 commonly used simple data objects called "resources," such as Patient, Allergy and Medication. The resources can be accessed individually or in a package similar to a CDA document. The belief is that these resources are good enough for 80% of use cases; for the remaining 20%, where more details or special elements are required, developers can extend the resources as needed. This is in keeping with FHIR's philosophy of making the standard as easy as possible to implement. Standard nomenclatures of medical terms are built into the definitions of these resources, so developers don't have to be skilled in medical terminology to write the software. This leads to lower-cost systems. See where this is headed?
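As a rough illustration of why these resources are easy to work with, here is a minimal, hypothetical Patient resource rendered as JSON, parsed with nothing more than a standard library. The field names follow the FHIR Patient definition of that era; the values are invented.

```python
import json

# A minimal, hypothetical FHIR Patient resource in JSON form.
# Field names follow the FHIR Patient resource; the values are made up.
patient_json = """
{
  "resourceType": "Patient",
  "id": "example",
  "name": [{"family": ["Chalmers"], "given": ["Peter"]}],
  "gender": "male",
  "birthDate": "1974-12-25"
}
"""

patient = json.loads(patient_json)
print(patient["resourceType"])         # Patient
print(patient["name"][0]["given"][0])  # Peter
print(patient["birthDate"])            # 1974-12-25
```

Contrast this with parsing a full C-CDA document: a general-purpose JSON parser and a handful of lines extract discrete data, which is the "not as bulky as C-CDA" middle ground the JASON task force asked for.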
Any health care information system in the U.S. has to be secure to comply with HIPAA and other regulatory requirements. Data security in FHIR covers transport security, payload security and authorization (i.e., who can see what). The transport security mechanism already exists: it is the same Transport Layer Security (TLS) that Internet browsers use for HTTPS.
The payload access and authorization functions are provided by Open Authorization (OAuth2). OAuth2 is used by websites that let visitors sign in with a Google or Facebook account to authenticate themselves; basically, Google or Facebook vouches for the visitor's identity. Once the visitor's identity is verified, the exchange also conveys what data that visitor is allowed to access. While OAuth2 is a commonly used standard, more work is needed to ensure it operates correctly for health care.
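In practice, the result of an OAuth2 sign-in is an access token that the client presents on every FHIR request. The sketch below shows only that final step; the token value and media type are illustrative assumptions, and the full OAuth2 handshake (redirects, consent, token exchange) is omitted.

```python
# Sketch: after the OAuth2 handshake, the client holds an access token
# and attaches it to each FHIR request as a Bearer credential.
# The token value here is invented for illustration.

def authorized_headers(access_token: str) -> dict:
    """HTTP headers for a FHIR request authorized via OAuth2."""
    return {
        "Authorization": f"Bearer {access_token}",
        "Accept": "application/json+fhir",
    }

headers = authorized_headers("example-access-token")
print(headers["Authorization"])  # Bearer example-access-token
```

The EHR's authorization server decides what that token is allowed to reach, which is how "who can see what" is enforced without the application ever handling the user's password.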
It is very hard to get the entire medical community to agree on what path to take toward interoperability. FHIR seems to have the approval of a majority of stakeholders. It appears the standard to unlock data held in EHR silos is finally taking shape as innovators mix and match data from many sources to give us a quantum leap in productivity and a way out of EHR-specific ecosystems.
Source: California Healthline / iHealthBeat
www.californiahealthline.org / www.ihealthbeat.org