CMS Announces AI Challenge

March 28, 2019

At the Health Datapalooza event, the Centers for Medicare & Medicaid Services (CMS) announced a new competition that aims to accelerate innovative solutions to better predict health outcomes and improve the quality of care for patients. Following President Trump’s executive order to prioritize research and development of America’s artificial intelligence capabilities, the CMS Artificial Intelligence Health Outcomes Challenge will unleash innovative solutions as CMS continues to move the healthcare system towards value.

Partnering with the American Academy of Family Physicians and the Laura and John Arnold Foundation, the CMS AI Health Outcomes Challenge will engage with innovators from all sectors – not just from healthcare – to harness AI solutions to predict health outcomes for potential use in CMS Innovation Center payment and service delivery models.

“The Artificial Intelligence Health Outcomes Challenge is an opportunity for innovators to demonstrate how artificial intelligence tools – such as deep learning and neural networks – can be used to predict unplanned hospital and skilled nursing facility admissions and adverse events,” said CMS Administrator Seema Verma. “For artificial intelligence to be successful in healthcare, it must not only enhance the predictive ability of illnesses and diseases, but also enable providers to focus more time with patients. The power of artificial intelligence will truly be unleashed when providers understand and trust the data and predictions.”

The Challenge aims to develop artificial intelligence-driven predictions that healthcare providers and clinicians participating in CMS Innovation Center models could use to reduce the burden of performing quality improvement activities and make quality measures more impactful.

The Artificial Intelligence Health Outcomes Challenge is a three-stage competition that will begin with the Launch Stage, in which participants will submit an application. Up to 20 participants will be selected to participate in Stage 1 of the Challenge. We anticipate that more information about Stage 1 and Stage 2 will be announced later this year.

CMS and the partnering organizations will award up to $1.65 million in total to selected participants in Stage 1 and Stage 2. Prize amounts are subject to change. If selected for Stage 1, participants will develop algorithms that predict health outcomes from Medicare fee-for-service data, and strategies and methodologies to explain the artificial intelligence-driven predictions to frontline clinicians and physicians while building trust in the data. Participants in Stages 1 and 2 of the competition will use Medicare claims data sets provided by CMS to develop their algorithms and solutions.
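The announcement does not prescribe any particular modeling approach. Purely as an illustrative sketch – the feature names, toy data, and scoring logic below are invented, not from CMS – a minimal logistic-regression-style admission-risk model over claims-derived features might look like this:

```python
import math

# Toy claims-derived features per beneficiary:
# (prior admissions, chronic condition count, ED visits in past year).
# All data here is invented for illustration.
TRAIN = [
    ((0, 1, 0), 0), ((3, 4, 2), 1), ((1, 2, 1), 0),
    ((4, 5, 3), 1), ((0, 0, 0), 0), ((2, 3, 2), 1),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, lr=0.1, epochs=2000):
    """Fit logistic-regression weights by plain stochastic gradient descent."""
    w = [0.0, 0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def admission_risk(w, b, x):
    """Predicted probability of an unplanned admission for feature vector x."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

w, b = train(TRAIN)
# A beneficiary with many prior admissions should score higher than one with none.
print(admission_risk(w, b, (4, 5, 3)) > admission_risk(w, b, (0, 0, 0)))
```

Real submissions would, of course, train far richer models (the announcement mentions deep learning and neural networks) on actual Medicare fee-for-service claims, with the explainability work being the harder part.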

Learn more about the CMS AI Challenge here:

Deputy Secretary of HHS Speaks at Health Datapalooza

March 27, 2019

Eric D. Hargan, Deputy Secretary of the Department of Health and Human Services (HHS), spoke at Health Datapalooza. Below are his prepared remarks:

Good morning, and thank you for inviting me to speak to you all today. It’s always an honor to speak at Datapalooza, but especially this year, when this gathering marks its tenth anniversary.

We’re all here, just like we were last year, because we care about data, and we like talking about the technology we can develop with it. But I want to think about how data and technology fit into the bigger picture: Why is data important? Why are we all at a conference devoted to it?

It’s not only so all of us have an excuse to get in touch with our inner nerds—although that’s an important part of the appeal for me.

But simply having a lot of data doesn’t, in and of itself, equal success. We always need to ask, what can this data help us do? How can we better utilize data to help achieve our mission?

That mission, for HHS, is to enhance and protect the health and well-being of every American—and I suspect many of you share that mission. We must always keep the mission in mind.

Today, I want to give you a sense of how new levels of data and new technology fit into one key priority Secretary Azar has selected as part of HHS’s overall mission. That priority is transforming our healthcare system from one that pays for sickness and procedures to one that pays for health and outcomes, a transition often known as going from volume to value.

We’ve laid out four particular areas for driving this transformation, the four Ps: patients who are in control as consumers, physicians who work as accountable navigators of the health system, payments based on outcomes, and prevention of disease before it occurs or progresses.


I’ll start with patients, because they are, or ought to be, at the center of our healthcare system and all our thinking around improving healthcare.

To the greatest extent possible, we want to put patients in control of their own health, and we want to be learning from their experiences to develop new, patient-centered approaches to treating disease.

Even though all of us work in healthcare in some form or another, many of you likely know how disempowered our healthcare system can make American patients feel. I’m sure almost all of you have experienced times when you were the patient, or a loved one was, and you felt like you were at the mercy of a healthcare system that viewed you more like a number than a real human being capable of making informed decisions.

One key piece of addressing this challenge is making sure you have access to all the relevant information about your health. Earlier this year, the Trump administration took a major step in this direction with the release of two draft rules, from CMS and the Office of the National Coordinator for Health IT, aimed at ensuring patients and providers have access to interoperable health information. Our proposals are centered on one goal: getting patients access to their records, period. As you can tell from the relative simplicity of that statement, if not from the actual length of the draft rules we’ve put out, we do want to make these regulations as simple as possible.
We want to dictate the what, not the how, in health IT. We aren’t going to micromanage exactly how providers, payers, and innovators make health IT interoperable and patient-accessible—we’re just going to say it has to happen, and let private actors determine the best way to do it.

We believe the potential here is huge, not just to put patients in better control of their own health and their own healthcare, but also for private innovators to create new tools that empower patients in doing so.

Interoperability of health data is complemented by work we’ve done across HHS to improve the availability and usability of the huge amounts of data the department has. Under President Trump, CMS has taken historic strides toward making Medicare and Medicaid data more accessible to researchers and entrepreneurs. Specifically, I know many of you in this room, strange as it may seem to many outside this room, salivated for years over the idea of CMS releasing its Medicare Advantage claims data, and we were pleased to start making that data available last year.

Better coordination of data efforts is also key to the initiative that President Trump announced in his State of the Union address around pediatric cancer, which will involve a strategic effort to gather more data on pediatric cancer, an under-researched area, and break down silos between existing data sources.

Data availability is one of the major areas of discussion for an initiative my office has undertaken, called the Deputy Secretary’s Innovation and Investment Summit. The goal of the summit is to examine the innovation and investment landscape within the healthcare sector, emerging opportunities, and the government’s role in facilitating more investment and accelerated innovation. We were struck by how much discussion there was among investors and innovators around the importance of data.

One particular area where we’ve worked with the private sector is what, in my office, we’ve dubbed PETs. No, not your cat or your dog, though those can be great for your health too—really, science shows they can alleviate stress.

But here, we’re talking about pets as in PETs, which stands for “patient empowering technologies.”

As Deputy Secretary, I’ve launched an effort to understand how we can use these technologies to empower patients in improving their own health and, especially important, avoiding costly, inconvenient trips to the doctor’s office or hospital.

Really, from a value-based care perspective, it would have been even better to have a cute acronym for “technologies that keep you out of the doctor’s office,” but turning TTKYOOTDO into a usable title was beyond even our truly elite acronym makers—I mean, our TEAM—at HHS.

One place where we’re examining PETs deals with an often-neglected, under-treated challenge: serious mental illness. About 100,000 American adolescents and young adults experience a first episode of psychosis each year, and we know that an early, comprehensive intervention in individuals experiencing their first psychotic episode can significantly increase their quality of life.

We have an intervention that works: Coordinated Specialty Care, or CSC, which is an evidence-based, multi-component team intervention that has been shown to improve outcomes, shorten inpatient stays, and reduce costs among young people with psychotic symptoms. Currently, almost every state has at least one CSC program, with a total of more than 260 nationwide. But people in need of this intervention may live some distance from the CSC program, which means there may be an opportunity to use technology to expand the reach of these programs.

So, I’m pleased to announce to all of you for the first time today, the National Institutes of Health and the Substance Abuse and Mental Health Services Administration will begin to explore technology-assisted implementation of these programs, including input from providers and patients on how PETs can be integrated into the existing care pathway.

I think that example goes to show that patient-centered technology is far from just a novelty or a fad—it can offer real opportunities to improve care for patients, including for illnesses that, like serious mental illness, have been stubborn challenges for too long.


We’re also exploring how technology can drive progress on our second P: providers as accountable navigators of our health system.

We want to help determine, for instance, what are the best ways to keep patients digitally connected to their providers. Is it a text message from their doctor? A phone call? An e-mail? A wearable that sends information directly? Data can help give us the answer to these questions—data that some of you may be gathering already in your work.

Technology can also help physicians work better with their patients to identify the lowest cost, most appropriate treatments. As one example, earlier this year, we proposed that every Medicare Part D plan make available a real-time pharmacy benefit tool. These tools, already used in the commercial market, can provide instant electronic access to a wealth of information for both patients and physicians.

We’ve heard from physicians that, when they’re prescribing a drug, they often are blindly picking a drug without knowing how much it will cost their patient, or whether it’s even covered by insurance.

With a real-time pharmacy benefit tool, the provider and patient can sit there and instantly find out which drugs are covered by insurance, how much they cost, how much they’ll cost the patient, and more. With these tools, the doctor can not only find out what kind of authorizations might be necessary for a particular drug—often, they can get started on the authorization right then and there.
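CMS has not published a standard API for these tools, but conceptually the response a real-time pharmacy benefit tool returns can be pictured as a small record per drug. The field names, drug names, and dollar amounts below are hypothetical, for illustration only:

```python
from dataclasses import dataclass

@dataclass
class BenefitCheck:
    """Hypothetical shape of one drug's real-time benefit response."""
    drug: str
    covered: bool
    plan_cost: float           # what the plan would pay
    patient_cost: float        # patient's out-of-pocket cost
    prior_auth_required: bool
    alternatives: tuple        # lower-cost covered options, if any

def cheapest_covered(checks):
    """Pick the covered option with the lowest patient cost,
    preferring drugs that need no prior authorization."""
    covered = [c for c in checks if c.covered]
    return min(covered, key=lambda c: (c.prior_auth_required, c.patient_cost))

# Invented example data for two therapeutically similar drugs.
checks = [
    BenefitCheck("BrandStatin", True, 310.0, 95.0, True, ("GenericStatin",)),
    BenefitCheck("GenericStatin", True, 12.0, 4.0, False, ()),
]
print(cheapest_covered(checks).drug)  # with this toy data: GenericStatin
```

The point of the sketch is simply that once coverage, cost, and prior-authorization status are available as structured data at the point of prescribing, choosing the lower-burden option becomes a trivial comparison rather than a phone call.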


The real-time pharmacy benefit tools are just one piece of efforts we’ve undertaken to make payments in healthcare, our third P, more transparent and more empowering for patients and providers. We know we need to take steps on our side, at HHS, to ensure that we’re paying for the right treatment, in the right setting, at the right time—and technology can help make that happen. In 2018, for instance, we created two new ways for Medicare to pay providers specifically for forms of “virtual care,” delivered remotely.
Providers can now be reimbursed for remote patient monitoring visits and for assessments of electronically transmitted images. Previously, Medicare could not pay for a physician’s phone or video check-in with a patient separately from an in-person visit.
I think we would all agree that, if we’re trying to get the best out of modern communications technology, only paying for remote interactions that involve someone having to go into the doctor’s office isn’t going to do the trick. So we’ve started paying separately for physicians to consult with their patients remotely, using technology, without the patient being in a doctor’s office or other health facility. In many circumstances, patients can now check in with their doctors from home.
One of the popular points of discussion among participants in our innovation and investment summit that I mentioned was just how crucial CMS reimbursement and FDA approval policies are for driving investment in the healthcare space. Now, this isn’t exactly earth-shattering news: Investors in healthcare want to be approved by the world’s gold standard drug safety agency, and they want to get reimbursed by the single biggest healthcare payer on the planet.

But still, it was a reminder about just how much our decisions at HHS drive where private markets go—and we want to ensure we’re driving innovators toward the development of newer technologies that can deliver us better, lower cost care.


One area where we believe we are making important improvements is how FDA looks at medical devices, which brings me to our fourth P, preventing disease. The use of more sophisticated software in and as medical devices offers tremendous promise to monitor patients’ health, diagnose, inform treatment, and improve outcomes, but it also poses new challenges for FDA’s work.
Over the last two years, FDA Commissioner Scott Gottlieb has laid out ambitious plans for modernizing how the agency looks at medical devices, including those that are digital health technologies, and FDA has been hard at work on implementing a huge array of changes.
For example, we’re looking at how to modernize the traditional 510(k) pathway for medical devices—an approval pathway that will turn 43 this year, which I think among Datapalooza attendees is considered practically geriatric.
One step in this effort is establishing an alternative 510(k) pathway to allow manufacturers of certain well-understood device types to rely on objective safety and performance criteria, rather than the typical approval pathway of comparing their product technologically to an already marketed device, known as a predicate. We’ve also taken major steps toward more incorporation of real-world evidence into safety reviews of medical devices, leveraging the wealth of data collected as a part of routine clinical care and from devices themselves.
FDA’s staff has laid down an incredibly strong foundation for this work under Commissioner Gottlieb, and the commitment to modernizing our approach to medical devices, including digital health, will continue long after he leaves FDA.
Really, modernizing our work around devices is just the beginning. To unlock the next generation of medical advances, we have to look at how to monitor the safety and effectiveness of software in or as devices that don’t rely on a static algorithm, but instead leverage artificial intelligence to constantly adapt and learn. Dealing with technology that’s constantly changing itself, like AI can, is a challenge for the traditional regulatory framework Congress has created. But that’s not holding us back. In January, the FDA took new steps toward developing a pilot program to test new approaches to reviewing digital health devices, including those that use AI.
The potential for better treatment of costly chronic conditions here is huge: Imagine how smart technology can help patients with diabetes, or heart conditions, or so many other challenges.

We are paying special attention to the potential for new technologies to treat or prevent one of our country’s most costly health conditions, kidney disease.
We believe there has been too little innovation and investment in the kidney care space for too long, and we’re well aware that HHS payment policies may bear a share of the blame here.
Essentially, we begin paying for kidney patients once they’re already quite sick—spending $34 billion on patients with end-stage renal disease alone in 2016. The problem is, this $34 billion in spending pays not for outcomes, but for services. We then negotiate hard over those services, and we carefully specify what we pay for. This sounds logical, and it is, but it also means that we may be leaving so little margin and being so specific on services that we leave little incentive for providers to innovate in treatment. So they don’t.
If we reorient our incentives to pay for preventing kidney disease, and for providing patients with what may be more convenient options, like at-home or peritoneal dialysis, we believe we’ll not just see better care—we’ll also see technological advances too.
HHS resources can also spark new innovation in this area, which is why last year, HHS’s Office of the Chief Technology Officer launched KidneyX, a public-private partnership with the American Society of Nephrology, to spur development through prize competitions of transformational products like wearable or implantable artificial kidneys.
In our first competition, “Redesign Dialysis,” we’re encouraging and supporting innovators to develop new technologies and approaches that could usher in the next generation of dialysis products. We received 165 submissions for this round, including a number of proposals that could help advance an artificial kidney. Submissions came not only from the kidney community but from engineers, medical device companies, and many others.
We’re thrilled with this level of interest, and it shows what a prize competition can drive in an otherwise neglected investment space.
You’ll be hearing more about this topic tomorrow from Adam Boehler, the Secretary’s senior advisor for value based transformation and the director of CMMI, and Ed Simcox, our chief technology officer. On many of the other areas I’ve discussed today, we’re pleased that everyone at Datapalooza will get to hear from many other HHS leaders, including Gopal Khanna from AHRQ, Don Rucker from ONC, Amy Abernethy from FDA, and more.
So, what I’ve laid out for you today are just some of the exciting areas where technology is helping to put patients in control, help providers work as navigators of the health system, pay for the right treatments and the right outcomes, and help prevent or cure disease.
These efforts to move to a system that pays for health and outcomes rather than sickness and procedures will mean better care for Americans at a lower cost.

But, when it comes to a lofty goal like lowering the cost of care, we’re often told that there is no silver bullet. I’ve been to my share of healthcare conferences, and “there’s no panacea” becomes one of those phrases that starts to just roll off the tongue.

And it’s true: there are no easy answers in healthcare. Any ambitious goal like moving to value-based care is going to require a comprehensive approach—that’s why we’re focused on patients, providers, payments, and prevention.

But in each of these areas, we do have a special tool on our side: the incredible advances we’re seeing in data and technology.

New technology and better use of data will make the transition to a value-based, more affordable, higher quality healthcare system a lot more feasible.

But we’re not going to continue these advances without the entrepreneurial spirit and creativity of everyone in this room.

That means all of you have a vital role to play in moving toward a healthcare system that provides better care at a lower cost.

We, and American patients, need you to play a key role in thinking about how we can move to a value-based healthcare system.

In return, at every step of the way, HHS will take every possible measure to empower you with the data you need, and unleash the technology you are developing to help the patients we all want to serve.

Together, I believe we can deliver remarkable advances in terms of both quality and cost of American healthcare in the years to come.

Thank you so much for your time today, and I wish you a great rest of Datapalooza.

Senate Health Committee Introducing New Healthcare Funding Bill

January 18, 2019

Bill provides five years of funding for Community Health Centers, Teaching Health Centers, National Health Service Corps, and Special Diabetes Programs which help keep care within reach for millions

Senate Health Committee Chairman Lamar Alexander (R-Tenn.) and Ranking Member Patty Murray (D-Wash.) today introduced legislation that would extend for five years federal funding for community health centers and four other federal health programs that are set to expire at the end of the fiscal year.

“This legislation is the first step in ensuring millions of Americans can continue to have access to quality health care they can afford close to their homes,” Alexander said. “There are 1,400 community health centers that provide health care services at about 12,000 sites to approximately 27 million Americans, including to 400,000 Tennesseans in 2016. Many of these centers serve patients in rural areas who otherwise would have to travel far distances to access health care. This legislation will also extend funding for health care workforce programs that community health centers rely on.”

“These programs help recruit and retain health professionals to serve in underserved areas and across the country, support research and services to manage diabetes, and make sure families can get the care they need, in their communities, close to home. I’ve heard from families throughout Washington state about how grateful they are to have a community health center close at hand with providers who know and understand their families and communities, so I’m glad we are introducing bipartisan legislation that gives these programs the stability they need to continue recruiting providers and serving their communities for years to come,” Murray said.

This legislation would provide five years of mandatory funding for the:

  • Community Health Center program;
  • National Health Service Corps;
  • Teaching Health Center Graduate Medical Education program;
  • Special Diabetes Program at the National Institutes of Health (NIH); and
  • Special Diabetes Program for Indians.

Mandatory funding for the programs is set to expire after September 30, 2019.

The Committee will hold a hearing on this legislation to hear from program experts on January 29, 2019.

Congress Questions ONC on Implementing 21st Century Cures Act

January 4, 2019

On Tuesday, December 11, the House Energy & Commerce Committee’s Subcommittee on Health held a hearing capping off its oversight of 21st Century Cures Act implementation for the outgoing 115th Congress by focusing on the Office of the National Coordinator for Health Information Technology (ONC). The witness was Donald Rucker, MD, the National Coordinator for Health IT.

Chairman Greg Walden (R-OR) said in his opening statement: “The fundamental value proposition of Electronic Health Record systems is the continuity of evidence-based care; however, patient health data continue to be fragmented and difficult to access for health care providers and patients themselves. The functionality of EHR systems lags behind the technological capabilities presently available, and until we close that gap I do not see how we can truly recognize the potential of clinical registries, payment reform, or health information exchanges.”

Committee members had the opportunity to learn more about health information technology (HIT) policies and the work ONC has done in implementing the Cures Act’s HIT provisions. Member questioning largely focused on health data privacy and security, physician burden, and patient access to their health data. Dr. Rucker was unable to provide members with any specifics on the information blocking rule currently under review at the Office of Management and Budget. With the partial government shutdown still under way at the time of this writing, I don’t expect it will see the light of day in the near future.

Dr. Rucker’s testimony and some Q&A are below:

Interoperability and Health Information Technology

December 11, 2018

I was very honored to appear on John Gilroy’s TechTalk show on Federal News Radio. The interview ran the gamut from artificial intelligence and blockchain to health IT certification and interoperability.

A Decade of Progress on Interoperability

December 4, 2018

As we approach the 2020s, it is helpful to review how we got here.

I have been looking back at all of the work accomplished on health data exchange, as well as some of the challenges that still remain. In 2008, most of our healthcare system was still paper-based. Fewer than 10 percent of hospitals had implemented even a basic electronic health record (EHR) system.


As we can see from the data above, provided by the Office of the National Coordinator (ONC), a great deal of progress occurred over the next seven years. Of course, much of this was due to the federal incentives for EHR adoption incorporated in the Health Information Technology for Economic and Clinical Health (HITECH) Act. The HITECH Act was enacted as part of the American Recovery and Reinvestment Act of 2009, signed into law on February 17, 2009, to promote the adoption and meaningful use of health information technology. HITECH provides financial incentives to “eligible professionals” and hospitals for the meaningful use of certified electronic health records (EHRs). An eligible professional is generally a physician, although in the Medicaid program some mid-level providers were also included. CMS had major responsibility for the incentives (and penalties) hospitals and clinicians would receive, broken into two separate programs, Medicare and Medicaid. With well over $30 billion in payments at stake, it is no wonder that we saw a sharp uptick in EHR adoption.

The ONC is responsible for the certification program for health information technology, the use of which is required to be eligible for a payment or to avoid a penalty. As part of HITECH, the ONC also oversaw the $564 million State Health Information Exchange (HIE) Cooperative Agreement Program. In total, 56 states, eligible territories, and qualified State Designated Entities (SDEs) received awards. This program was a big push towards interoperability and led to rapid growth in the HIE market as well. As the work began to transition physicians and hospitals from paper-based to electronic systems, it was critical for these systems to interoperate, allowing clinical data to flow between health care organizations.

Julia Adler-Milstein, from the Department of Health Management & Policy at the University of Michigan, along with David Bates and Ashish K. Jha, conducted a study published in Health Affairs in 2013. The results showed progress: the number of operational HIE organizations identified rose from 55 in 2009 to 119 in 2012. There were still some concerns, however. Some technical challenges remained, but the primary issue was the business model – as the authors stated:

“Long-term financial sustainability for organizations facilitating health information exchange appears to be the most pressing challenge. The fact that three-quarters of efforts cite developing a sustainable business model as a major barrier is a warning to policy makers that the growth in health information exchange will likely falter unless these efforts become self-sustaining or there is a long-term public commitment to their financing.”

With no long-term commitment to public financing, and no strong business model for sustainability on the horizon, many of these efforts began to falter at the end of the grant period. A 2016 study produced by NORC under contract with the ONC found that only a small number of states succeeded in developing and implementing sustainable HIE systems. At the time of the study, seven of the grantees were no longer in business, and even fewer are in operation today. With the continued growth in digitization of health records, by 2018 more than 95 percent of hospitals and nearly 90 percent of office-based physicians had implemented an EHR. So we started to see a picture where we had traded the paper silos of the past for the largely digital silos of the present.

To address many of the concerns around building out a national infrastructure for health information exchange, in 2012 the ONC announced its plan for enforcing Conditions of Trusted Exchange (CTE) and Network Validated Entities (NVE). This approach was quickly discarded (although components have been revived under the current ONC plans we will discuss later). During this time, a very successful open source effort overseen by the ONC, called the Direct Project, began. Launched as part of what was then known as the Nationwide Health Information Network (NHIN), the Direct Project was created to specify a simple, secure, scalable, standards-based way for participants to send authenticated, encrypted health information directly to known, trusted recipients over the Internet. The Direct Project had more than 200 participants from over 60 different organizations, and I was honored to be one of them.
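To make the mechanics concrete: Direct rides on ordinary SMTP addressing, with the payload signed and encrypted via S/MIME before transport. The sketch below builds the kind of MIME message a Direct sender hands to its HISP. The addresses and payload are made up, and the S/MIME signing and encryption step is deliberately omitted, since it requires X.509 certificate tooling beyond the standard library:

```python
from email.message import EmailMessage

def build_direct_message(sender, recipient, ccd_xml):
    """Assemble a transition-of-care message for a Direct address.

    In a real deployment, the HISP would S/MIME-sign and encrypt this
    message with the endpoints' X.509 certificates before SMTP delivery;
    that step is omitted in this sketch.
    """
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "Transition of care summary"
    msg.set_content("Continuity of care document attached.")
    # Attach the clinical document (e.g., a CCD/C-CDA XML file).
    msg.add_attachment(
        ccd_xml.encode("utf-8"),
        maintype="application",
        subtype="xml",
        filename="ccd.xml",
    )
    return msg

# Hypothetical Direct addresses; real ones are issued by a HISP.
msg = build_direct_message(
    "drsmith@direct.exampleclinic.org",
    "intake@direct.examplehospital.org",
    "<ClinicalDocument>...</ClinicalDocument>",
)
print(msg["To"])
```

The appeal of this design is exactly its resemblance to email: any system that can construct a MIME message can participate, and the hard parts (identity, certificates, trust) are pushed into the HISP and trust-framework layer rather than the message format.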

After developing the standards and specifications for Direct, a series of pilots were initiated. One of these was Gorge Health Connect, the HIE in Oregon I had founded in 2010 using a HRSA planning grant. Using our Medicity iNexx software, we were able to quickly set up a Health Information Service Provider (HISP) in order to enable Direct secure messaging between provider organizations. Here is a demo of the pilot:

Very soon after the Direct Project initiatives started to scale across the country, a serious issue came to light – we needed a security and trust framework that would allow participants to have some level of assurance around identity and strong security. Thus was born DirectTrust, a non-profit organization created to solve these issues, and I was happy to serve as a founding member of the Board of Directors. In March 2013, DirectTrust was awarded an ONC Cooperative Agreement to further its work in accreditation, trust anchor distribution services, and governance of the DirectTrust community. The Cooperative Agreement was renewed for another year in 2014. Part of the Exemplar Health Information Exchange Governance Program, the grant was to “increase interoperability, decrease cost and complexity, and facilitate trust among participants using Direct for health information exchange of personal health information for health care improvements.”

Direct secure messaging was soon incorporated into the ONC’s standards and certification criteria for use in the EHR Incentive Program. And the ease of integrating it into a clinician’s workflow made it a primary protocol for transitions of care. Those of us working on these efforts believed this could truly be a replacement for the fax machine in healthcare (and I still believe this today). As David Kibbe, MD, chief executive officer of DirectTrust, said in 2015:

“In terms of new technology adoption, it’s been pretty fast. If you look at the growth of Direct over the past two years – and it’s only been three since it was available as a standard – it’s pretty astounding. We’re now up to 40,000 healthcare organizations that are contracted for Direct exchange by one of the HISPs in DirectTrust’s network.”

This year the DirectTrust network saw 47.8 million health data exchange transactions in the first quarter – a 90 percent increase from the same time period in 2017. So we now have a way to push structured documents to known participants for clinical care. But push is only half the story – what about querying for records?

Remember the NHIN mentioned earlier – this was a cooperative established in 2004 under the ONC to improve the quality and efficiency of healthcare by establishing a mechanism for nationwide health information exchange. The group included federal agencies; local, regional, and state-level Health Information Exchange organizations; and private companies. In 2012 the ONC transitioned the NHIN exchange to the eHealth Exchange. The participants who implemented the standards and services and executed the Data Use and Reciprocal Support Agreement (DURSA) legal agreement were now in the eHealth Exchange. Overseeing the eHealth Exchange, and defined in the DURSA, is the Exchange Coordinating Committee. Shortly thereafter the Coordinating Committee designated Healtheway, a new nonprofit organization, to assume operational support of the eHealth Exchange effective October 1, 2012. The ONC said that the transition to a public/private partnership reflected its strategy to be an incubator for innovation and to focus on supporting a sustainable ecosystem of organizations that have found secure and scalable ways to exchange health information.

The eHealth Exchange has grown tremendously over the past decade. It is the largest and most successful health information exchange network in the country. The list of participants continues to grow and includes the Department of Defense, the Veterans Health Administration, and the Social Security Administration. In 2015 Healtheway rebranded itself as The Sequoia Project. Another important initiative overseen by The Sequoia Project is Carequality. Carequality was formed with an ambitious goal: to tie together the many valuable health information exchange activities occurring throughout the country, and solve the final mile of interoperability between them.

Another important effort, the CommonWell Health Alliance, began in 2013 and went live in 2014. CommonWell, as it’s commonly known, is a nonprofit trade association working to make interoperability an inherent part of health IT. Composed initially of some of the major EHR vendors, it has grown in scope and importance over the last four years. CommonWell was inspired by former National Coordinator Farzad Mostashari, MD, who during a 2012 Bipartisan Policy Center meeting challenged the assembled health IT leaders to come up with a market-driven solution to the patient identity problem, since the government was unable to address it for them. Arien Malec and David McCallie, MD, who were serving on the Federal Advisory Committee to the ONC, took up the call, and CommonWell was eventually born.

Now things have changed considerably. With the passage of MACRA and the movement towards value-based care and payment models, and more recently the 21st Century Cures Act, which includes a number of interoperability provisions (including TEFCA, which I have written about here), there is a big policy push toward improving interoperability. The private market continues to innovate, and technological solutions are flourishing. Many of the standards and protocols for exchanging clinical information are developed by Health Level Seven (HL7), a not-for-profit, ANSI-accredited standards developing organization dedicated to providing a comprehensive framework and related standards for the exchange, integration, sharing, and retrieval of electronic health information. And while messaging and structured documents are very important, healthcare has been slow to adopt the modern web-based technologies used in other industries.

Then along came Grahame Grieve and other thought leaders with the development of HL7 FHIR (for those wondering, it stands for Fast Healthcare Interoperability Resources). FHIR is a standard describing data formats and elements (known as resources) and an application programming interface (API) for exchanging electronic health records. In 2011 Grieve posted that HL7 needed a fresh look. The work began with a small team at HL7 developing the standard, and after five years it has finally gained a great deal of traction. In fact, Apple has partnered with a number of health systems to allow patients to access their health information right on their iPhones using the FHIR standards.

FHIR is designed specifically for the web and provides resources and foundations based on XML, JSON, HTTP, Atom, and OAuth. Developers don’t need a great deal of healthcare experience to quickly begin coding, since these are the same standards commonly used across the Internet. And with the federal government strongly promoting the use of open application programming interfaces (APIs), FHIR is positioned to meet the needs of the healthcare industry and help take us into the future of interoperability.
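To make that concrete, here is a minimal sketch of what a FHIR resource and its RESTful address look like. The field names follow the published FHIR Patient resource; the id, demographics, and server base URL are illustrative only:

```python
import json

# A minimal FHIR Patient resource expressed as JSON (illustrative values).
patient_json = """
{
  "resourceType": "Patient",
  "id": "example",
  "name": [{"family": "Chalmers", "given": ["Peter", "James"]}],
  "gender": "male",
  "birthDate": "1974-12-25"
}
"""
patient = json.loads(patient_json)

# FHIR's RESTful API addresses each resource at [base]/[type]/[id],
# so an ordinary HTTP GET retrieves it (server base URL is hypothetical):
url = f"https://fhir.example.org/{patient['resourceType']}/{patient['id']}"
print(url)  # https://fhir.example.org/Patient/example
print(patient["name"][0]["family"])  # Chalmers
```

This is exactly why developers without deep healthcare backgrounds can get started quickly – the resource is plain JSON and the API is plain HTTP.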

And the other private sector initiatives are not standing still. The Sequoia Project (where I serve on the Board of Directors) recently underwent a significant reorganization to position itself for the future. CommonWell has become a Carequality Implementer, and the eHealth Exchange has become a member of Carequality and is in the process of becoming an implementer. “By reorganizing the eHealth Exchange and Carequality into separate legal entities, we further ensure unbiased, equitable treatment for the eHealth Exchange alongside every other implementer subject to Carequality oversight,” said Dave Cassel, who heads up Carequality. So with the eHealth Exchange and CommonWell now part of Carequality, and FHIR burning across the healthcare landscape, it seems that despite the challenges ahead, this past decade has shown significant progress in interoperability.

This post also appeared in Health Data Management Magazine

Holding law-enforcement accountable for electronic surveillance

August 24, 2018
MIT CSAIL’s cryptographic system encourages transparency with a public log of data requests
When the FBI filed a court order in 2016 commanding Apple to unlock the San Bernardino shooter’s iPhone, the news made headlines across the globe.

Meanwhile, every day there are thousands of court orders asking tech companies to turn over people’s private data. These requests often require some secrecy: companies usually aren’t allowed to inform individual users that they’re being investigated, and the court orders themselves are also temporarily hidden from the public. 
In many cases, though, charges never actually materialize, and the sealed orders inevitably end up forgotten by the courts that issue them. As a result, thousands of innocent people are unlikely to ever know that they were the targets of surveillance.

To address this issue, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have proposed a cryptographic system to improve the accountability of government surveillance while still maintaining enough confidentiality for police to do their jobs.

“While certain information may need to stay secret for an investigation to be done properly, some details have to be revealed for accountability to even be possible,” says CSAIL graduate student Jonathan Frankle, one of the lead authors of a new paper about the system, which they’ve dubbed “AUDIT” (“Accountability of Unreleased Data for Improved Transparency”). “This work is about using modern cryptography to develop creative ways to balance these conflicting issues.”

image courtesy MIT CSAIL

AUDIT is designed around a public ledger where government officials share information about data requests. When a judge issues a secret court order or a law enforcement agency secretly requests data from a company, they have to make an iron-clad promise to make the data request public later in the form of what’s known as a “cryptographic commitment.” If the courts ultimately decide to release the data, the public can rest assured that the correct documents were released in full. If the courts decide not to, then that refusal itself will be made known.
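A hash-based commitment is one standard way to build such an "iron-clad promise" (the AUDIT paper's actual construction may differ; this is a generic sketch). Publishing the hash binds the committer to the document without revealing it; opening the commitment later lets anyone verify the released document is the one originally committed to:

```python
import hashlib
import secrets

def commit(message: bytes) -> tuple[bytes, bytes]:
    """Return (commitment, nonce). The commitment can be published
    immediately; it binds the committer to `message` without revealing it."""
    nonce = secrets.token_bytes(32)  # random blinding factor
    commitment = hashlib.sha256(nonce + message).digest()
    return commitment, nonce

def verify(commitment: bytes, nonce: bytes, message: bytes) -> bool:
    """Anyone can check that an opened (nonce, message) pair matches."""
    return hashlib.sha256(nonce + message).digest() == commitment

# A court commits to a sealed data request on the public ledger...
order = b"Order 17-cv-0001: records for account X"
c, n = commit(order)

# ...and when the order is later unsealed, the public can confirm the
# released document is exactly what was committed to.
assert verify(c, n, order)
assert not verify(c, n, b"a different document")
```

The random nonce hides the message (so identical orders don't produce identical commitments), while the hash makes it infeasible to open the commitment to a different document later.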

AUDIT can also be used to demonstrate that actions by law-enforcement agencies are consistent with what a court order actually allows. For example, if a court order leads to the FBI going to Amazon to get records about a specific customer, AUDIT can prove that the FBI’s request is above board using a cryptographic method called “zero-knowledge proofs.” These proofs counterintuitively make it possible to prove that surveillance is being conducted properly without revealing any specific information about the surveillance.

As a further effort to improve accountability, statistical information from the data can also be aggregated so that the extent of surveillance can be studied at a larger scale. This enables the public to ask all sorts of tough questions about how their data is being shared. What kinds of cases are most likely to prompt court orders? How many judges issued more than 100 orders in the past year, or more than 10 requests to Facebook this month?

Frankle says the team’s goal is to establish a set of reliable, court-issued transparency reports, rather than rely on companies themselves voluntarily pulling together reports that might be inconsistent or selective in the information they disclose.

Importantly, the team developed its aggregation system using an approach called multi-party computation (MPC), which allows courts to disclose the relevant information without actually revealing their internal workings or data to one another. The current state-of-the-art MPC would normally be too slow to run across the entire court system, so the team took advantage of the court system’s natural hierarchy of lower and higher courts to design a particular variant of MPC that would scale efficiently for the federal judiciary.
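Additive secret sharing is one of the simplest MPC building blocks and illustrates the idea (the paper's scalable variant is more sophisticated; the courts and counts below are made up). Each party splits its private value into random shares that sum to the value; combining everyone's shares reveals only the aggregate:

```python
import random

P = 2**61 - 1  # prime modulus for the arithmetic shares

def share(value: int, n: int) -> list[int]:
    """Split `value` into n additive shares mod P; any n-1 shares look random."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

# Three hypothetical courts each hold a private count of surveillance orders.
counts = [12, 45, 7]
n = len(counts)

# Each court splits its count and distributes one share to every court.
all_shares = [share(c, n) for c in counts]

# Each court sums the shares it received; no single partial sum reveals
# any individual court's count...
partials = [sum(all_shares[i][j] for i in range(n)) % P for j in range(n)]

# ...but publishing the partial sums reveals exactly the aggregate.
total = sum(partials) % P
print(total)  # 64
```

Each court learns only random-looking shares and its own partial sum, yet the public statistic – total orders issued – comes out exactly right.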

According to Frankle, AUDIT could be applied to any process in which data must be kept secret but also subject to public scrutiny. For example, clinical trials of new drugs often involve private information, but also require enough transparency to assure regulators and the public that proper testing protocols are being observed.

“It’s completely reasonable for government officials to want some level of secrecy, so that they can perform their duties without fear of interference from those who are under investigation,” Frankle says. “But that secrecy can’t be permanent. People have a right to know if their personal data has been accessed, and at a higher level, we as a public have the right to know how much surveillance is going on.”

Next the team plans to explore how AUDIT's design could be tweaked, via software engineering, to handle even more complex data requests. They are also exploring the possibility of partnering with specific federal judges to develop a prototype for real-world use.

“My hope is that, once this proof of concept becomes reality, court administrators will embrace the possibility of enhancing public oversight while preserving necessary secrecy,” says Stephen William Smith, a federal magistrate judge who has written extensively about government accountability. “Lessons learned here will undoubtedly smooth the way towards greater accountability for a broader class of secret information processes, which are a hallmark of our digital age.”

Frankle co-wrote the paper with MIT professor Shafi Goldwasser, CSAIL PhD graduate Sunoo Park, undergraduate Daniel Shaar, and a second senior author, MIT principal research scientist Daniel J. Weitzner. 
The paper will be presented at the USENIX Security conference in Baltimore August 15-17. The research was supported by the MIT Internet Policy Research Initiative, the National Science Foundation, the Defense Advanced Research Projects Agency and the Simons Foundation.

Controlling robots with brainwaves and hand gestures

June 21, 2018

System enables people to correct robot mistakes on multi-choice problems

Getting robots to do things isn’t easy: usually scientists have to either explicitly program them or get them to understand how humans communicate via language.

But what if we could control robots more intuitively, using just hand gestures and brainwaves?

A new system spearheaded by researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) aims to do exactly that, allowing users to instantly correct robot mistakes with nothing more than brain signals and the flick of a finger.

Building off the team’s past work focused on simple binary-choice activities, the new work expands the scope to multiple-choice tasks, opening up new possibilities for how human workers could manage teams of robots.

By monitoring brain activity, the system can detect in real time if a person notices an error as a robot does a task. Using an interface that measures muscle activity, the person can then make hand gestures to scroll through and select the correct option for the robot to execute.

The system allows a human supervisor to correct mistakes using gestures and brainwaves –
credit Joseph DelPreto, MIT CSAIL

The team demonstrated the system on a task in which a robot moves a power drill to one of three possible targets on the body of a mock plane. Importantly, they showed that the system works on people it’s never seen before, meaning that organizations could deploy it in real-world settings without needing to train it on users.

“This work combining EEG and EMG feedback enables natural human-robot interactions for a broader set of applications than we’ve been able to do before using only EEG feedback,” says CSAIL director Daniela Rus, who supervised the work. “By including muscle feedback, we can use gestures to command the robot spatially, with much more nuance and specificity.”

PhD candidate Joseph DelPreto was lead author on a paper about the project alongside Rus, former CSAIL postdoctoral associate Andres F. Salazar-Gomez, former CSAIL research scientist Stephanie Gil, research scholar Ramin M. Hasani, and Boston University professor Frank H. Guenther. The paper will be presented at the Robotics: Science and Systems (RSS) conference taking place in Pittsburgh next week.

Intuitive human-robot interaction

In most previous work, systems could generally only recognize brain signals when people trained themselves to “think” in very specific but arbitrary ways and when the system was trained on such signals. For instance, a human operator might have to look at different light displays that correspond to different robot tasks during a training session.

Not surprisingly, such approaches are difficult for people to handle reliably, especially if they work in fields like construction or navigation that already require intense concentration.

Meanwhile, Rus’ team harnessed the power of brain signals called “error-related potentials” (ErrPs), which researchers have found to naturally occur when people notice mistakes. If there’s an ErrP, the system stops so the user can correct it; if not, it carries on.
“What’s great about this approach is that there’s no need to train users to think in a prescribed way,” says DelPreto. “The machine adapts to you, and not the other way around.”

For the project the team used “Baxter”, a humanoid robot from Rethink Robotics. With human supervision, the robot went from choosing the correct target 70 percent of the time to more than 97 percent of the time.

To create the system the team harnessed the power of electroencephalography (EEG) for brain activity and electromyography (EMG) for muscle activity, putting a series of electrodes on the users’ scalp and forearm.

Both metrics have some individual shortcomings: EEG signals are not always reliably detectable, while EMG signals can sometimes be difficult to map to motions that are any more specific than “move left or right.” Merging the two, however, allows for more robust bio-sensing and makes it possible for the system to work on new users without training.
“By looking at both muscle and brain signals, we can start to pick up on a person’s natural gestures along with their snap decisions about whether something is going wrong,” says DelPreto. “This helps make communicating with a robot more like communicating with another person.”
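The decision logic this fusion enables can be sketched as a toy gating function (illustrative only – the real CSAIL pipeline classifies raw EEG/EMG waveforms, and the target names here are made up): the EEG channel detects *whether* an error occurred, and only then is the EMG gesture used to scroll to a corrected choice.

```python
def supervise_step(errp_detected: bool, emg_gesture: str,
                   current_target: str, options: list[str]) -> str:
    """EEG gates the correction: only when an error-related potential
    fires does the EMG gesture scroll to a different target."""
    if not errp_detected:
        return current_target  # no error noticed: the robot carries on
    i = options.index(current_target)
    if emg_gesture == "left":
        return options[(i - 1) % len(options)]
    if emg_gesture == "right":
        return options[(i + 1) % len(options)]
    return current_target  # unrecognized gesture: leave choice unchanged

targets = ["hole_A", "hole_B", "hole_C"]  # hypothetical drill targets
print(supervise_step(False, "left", "hole_B", targets))   # hole_B
print(supervise_step(True, "right", "hole_B", targets))   # hole_C
```

Splitting the roles this way plays to each signal's strength: EEG is good at flagging that *something* is wrong, while EMG gestures are good at specifying *which* alternative to pick.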

The team says that they could imagine the system one day being useful for the elderly, or workers with language disorders or limited mobility.
“We’d like to move away from a world where people have to adapt to the constraints of machines,” says Rus. “Approaches like this show that it’s very much possible to develop robotic systems that are a more natural and intuitive extension of us.”

Materials provided by MIT CSAIL, 32 Vassar Street, Cambridge, MA 02139, USA 

Artificial Intelligence and the Future of Industry

October 29, 2017

AI and Industry 4.0

The entire world is feeling the impact of established and emerging artificial intelligence techniques and tools. This is transforming society and all areas of business, including healthcare and biomedicine, retail and finance, transportation and automotive, and other verticals. Marketing and communications are certainly being transformed through artificial intelligence techniques. Manufacturing is at the beginning of a major upheaval as automation and machine learning rewrite the rules of work. We are seeing applications in construction and additive manufacturing, as well as self-driving vehicles and industrial robotics. Robotic systems, the Internet of Things, cloud computing, and cognitive computing collectively make up what is termed “Industry 4.0.” The first three stages were mechanization, mass production, and basic automation.

The digital transformation of manufacturing and the supply chain means that data from factories is directly analyzed using AI technologies. EmTech Digital (short for emerging technology), produced by the Massachusetts Institute of Technology’s Technology Review magazine, is an annual conference that examines the latest research on artificial intelligence techniques, including deep learning, predictive modeling, reasoning and planning, and speech and pattern recognition. The 2016 event was especially interesting. To learn more about future events, go to:

EmTech Digital 2016 in San Francisco 

Artificial intelligence is already impacting every industry, powering search, social media, and smartphones and tracking personal health and finances. What’s ahead promises to be the greatest computing breakthrough of all time, yet it’s difficult to discern facts from hype. That is exactly what EmTech Digital tries to accomplish.

At the 2016 event a roundtable discussion on the State of AI was held with a panel of experts including:

  • Peter Norvig of Google
  • Andrew Ng (formerly) of Baidu
  • Oren Etzioni of the Allen Institute

The panel was moderated by MIT Technology Review editor in chief Jason Pontin.

State-of-the-Art AI: Building Tomorrow’s Intelligent Systems

Peter Norvig, Director of Research for Google, talks about developing state-of-the-art AI solutions for building tomorrow’s intelligent systems.

Deep Learning in Practice: Speech Recognition and Beyond

Andrew Ng, formerly Chief Scientist at Baidu, who in 2011 founded and led the Google Brain project – which built the largest deep-learning neural network systems of its time – discusses deploying deep learning solutions in practice with conversational AI and beyond.

AI for the Common Good

Oren Etzioni, CEO of the Allen Institute for AI, shares his vision for deploying AI technologies for the common good.

Videos courtesy of MIT Technology Review

#CornerCameras: An AI for your blind-spot

October 9, 2017

Compatible with smartphone cameras, MIT CSAIL system for seeing around corners
could help with self-driving cars and search-and-rescue

Earlier this year, researchers at Heriot-Watt University and the University of Edinburgh recognized that there is a way to tease out information about a hidden object even from apparently random scattered light. Their method, published in Nature Photonics, relies on laser range-finding technology, which measures the distance to an object based on the time it takes a pulse of light to travel to the object, scatter, and travel back to a detector.

And now further research has shown significant progress. Light lets us see the things that surround us, but what if we could also use it to see things hidden around corners?

This may sound like science fiction, but that’s the idea behind a new algorithm out of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) – and its discovery has implications for everything from emergency response to self-driving cars.

The CSAIL team’s imaging system, which can work with video from smartphone cameras, uses information about light reflections to detect objects or people in a hidden scene and measure their speed and trajectory – all in real-time. (It doesn’t see any identifying details about individuals – just the fact that they are moving objects.)

Researchers say that the ability to see around obstructions would be useful for many tasks, from firefighters finding people in burning buildings to drivers detecting pedestrians in their blind spots.

To explain how it works, imagine that you’re walking down an L-shaped hallway and have a wall between you and some objects around the corner. Those objects reflect a small amount of light on the ground in your line of sight, creating a fuzzy shadow that is referred to as the “penumbra.”

Using video of the penumbra, the system – which the team dubbed “CornerCameras” – can stitch together a series of one-dimensional images that reveal information about the objects around the corner.

“Even though those objects aren’t actually visible to the camera, we can look at how their movements affect the penumbra to determine where they are and where they’re going,” says PhD graduate Katherine Bouman, who was lead author on a new paper about the system. “In this way, we show that walls and other obstructions with edges can be exploited as naturally-occurring ‘cameras’ that reveal the hidden scenes beyond them.”

Bouman co-wrote the paper with MIT professors Bill Freeman, Antonio Torralba, Greg Wornell and Fredo Durand, master’s student Vickie Ye and PhD student Adam Yedidia. She will present the work later this month at the International Conference on Computer Vision (ICCV) in Venice.

How it works

Most approaches for seeing around obstacles involve special lasers. Specifically, researchers shine lasers at specific points that are visible to both the observable and hidden scene, and then measure how long it takes for the light to return.

However, these so-called “time-of-flight cameras” are expensive and can easily get thrown off by ambient light, especially outdoors.

In contrast, the CSAIL team’s technique doesn’t require actively projecting light into the space, and works in a wider range of indoor and outdoor environments and with off-the-shelf consumer cameras.

From viewing video of the penumbra, CornerCameras generates one-dimensional images of the hidden scene. A single image isn’t particularly useful, since it contains a fair amount of “noisy” data. But by observing the scene over several seconds and stitching together dozens of distinct images, the system can distinguish distinct objects in motion and determine their speed and trajectory.
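The averaging idea at the heart of this step can be illustrated with a toy simulation (the signal profile and noise levels below are made up, roughly matching the paper's ~0.1% contrast): a single noisy frame hides the signal, but averaging many frames shrinks the noise roughly as 1/√N.

```python
import random

random.seed(0)

# Hypothetical hidden-scene signal: a 1-D angular intensity profile whose
# bumps sit only ~0.1% above the background.
truth = [0.0, 0.0, 0.001, 0.001, 0.0]

def observe(signal, noise_std=0.01):
    """One noisy 'frame': the signal buried under camera noise ~10x larger."""
    return [s + random.gauss(0, noise_std) for s in signal]

def average(frames):
    """Stitch frames together by per-position averaging."""
    n = len(frames)
    return [sum(col) / n for col in zip(*frames)]

single = observe(truth)
stacked = average([observe(truth) for _ in range(5000)])

err_single = max(abs(a - b) for a, b in zip(single, truth))
err_stacked = max(abs(a - b) for a, b in zip(stacked, truth))
print(err_stacked < err_single)  # averaging recovers the buried signal
```

This is the same reason the system tolerates rain, as described below: uncorrelated disturbances average out over dozens of frames while the persistent penumbra signal survives.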

“The notion to even try to achieve this is innovative in and of itself, but getting it to work in practice shows both creativity and adeptness,” says professor Marc Christensen, who serves as Dean of the Lyle School of Engineering at Southern Methodist University and was not involved in the research. “This work is a significant step in the broader attempt to develop revolutionary imaging capabilities that are not limited to line-of-sight observation.”

The team was surprised to find that CornerCameras worked in a range of challenging situations, including weather conditions like rain.

“Given that the rain was literally changing the color of the ground, I figured that there was no way we’d be able to see subtle differences in light on the order of a tenth of a percent,” says Bouman. “But because the system integrates so much information across dozens of images, the effect of the raindrops averages out, and so you can see the movement of the objects even in the middle of all that activity.”

The system still has some limitations. For obvious reasons, it doesn’t work if there’s no light in the scene, and can have issues if there’s low light in the hidden scene itself. It also can get tripped up if light conditions change, like if the scene is outdoors and clouds are constantly moving across the sun. With smartphone-quality cameras the signal also gets weaker as you get farther away from the corner.

The researchers plan to address some of these challenges in future papers, and will also try to get it to work while in motion. The team will soon be testing it on a wheelchair, with the goal of eventually adapting it for cars and other vehicles.

“If a little kid darts into the street, a driver might not be able to react in time,” says Bouman. “While we’re not there yet, a technology like this could one day be used to give drivers a few seconds of warning time and help in a lot of life-or-death situations.”

The conclusion of the paper states:

We show how to turn corners into cameras, exploiting a common, but overlooked, visual signal. The vertical edge of a corner’s wall selectively blocks light to let the ground nearby display an angular integral of light from around the corner. The resulting penumbras from people and objects are invisible to the eye – typical contrasts are 0.1% above background – but are easy to measure using consumer-grade cameras. We produce 1-D videos of activity around the corner, measured indoors, outdoors, in both sunlight and shade, from brick, tile, wood, and asphalt floors. The resulting 1-D videos reveal the number of people moving around the corner, their angular sizes and speeds, and a temporal summary of activity. Open doorways, with two vertical edges, offer stereo views inside a room, viewable even away from the doorway. Since nearly every corner now offers a 1-D view around the corner, this opens potential applications for automotive pedestrian safety, search and rescue, and public safety. This ever-present, but previously unnoticed, 0.1% signal may invite other novel camera measurement methods.

This work was supported in part by the DARPA REVEAL Program, the National Science Foundation, Shell Research and a National Defense Science & Engineering Graduate (NDSEG) fellowship.

Materials provided by MIT CSAIL