Someone suggested I review the new IBM Center for The Business of Government report, Use of Dashboards in Government, by Sukumar Ganapati of Florida International University, pointing out one irony right off the bat: the report itself contains few examples of dashboard illustrations. So I first decided to create a dashboard of this PDF report in my social knowledgebase, use it to analyze the report, and reference all of my dashboard work relating to most of the examples it covers.

The report lists the following 11 dashboards (with links to my 7 recreated dashboards added):
The report lists Four Lessons Learned as follows (with a more detailed description below):
  1. Data quality is key to the credibility of dashboard performance measures
  2. Best practices resources are necessary in the design and use of dashboards
  3. Performance measures should reflect organization goals
  4. Dashboards are only tools; effectiveness depends on use
It also describes the Tufte principles of design, which state:
  • Dashboards should fit on a single page (or screen)
  • Dashboards should be simple
  • Dashboards should use the best display medium for communicating data effectively.
Based on my consultations with dashboard experts and my own work with most of these same dashboard providers, I can confirm the validity of the lessons and principles presented in this report and recommend Ganapati’s conclusions to readers.
The report also notes that large PDF files make some of the best source material for dashboards. I agree, because they often reflect comprehensive reports based on extensive subject matter expertise, peer review, and data collected with a specific purpose in mind, yet they are not readily accessible. Dashboards can provide that access.

For example, I used the PDF files of the US EPA Report on the Environment, the DoD Office of Inspector General Semi-Annual Report to Congress, and the US Army’s Weapon Systems 2011 to create dashboards that distilled the findings of each report.

One lesson the report does not mention, which I have learned in making dashboards more useful to different viewers, is that they need to be created as part of a broader data science process, such as the one I follow (a minimal sketch of the first two steps appears after this list):
  1. Mapping a PDF file to a knowledgebase with well-defined web addresses for all the key parts of the content
  2. Converting the knowledgebase to a spreadsheet so the data are in an interoperable, reusable format for those who want to use different dashboard tools
  3. Making it easy to import the spreadsheet into dashboard tools with a wide variety of statistical and visualization functions for processing in-memory data in the cloud, and
  4. Using dashboard tools that support both conventional business process management and newer dynamic case management (a methodology that goes beyond so-called agile software development by adding semantics and rules to the data rather than to lines of software code).
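
To make the first two steps concrete, here is a minimal sketch in Python of mapping a PDF to page-level web addresses and converting the result to a spreadsheet-friendly CSV. It assumes the pypdf package is available; the file names and the knowledgebase host are hypothetical placeholders, not the actual tools I use.

```python
# Minimal sketch of steps 1-2: extract text from a PDF report and save it
# as a CSV, one row per page, so any dashboard tool can import it.
# File names and the knowledgebase host below are hypothetical placeholders.
import csv

from pypdf import PdfReader  # assumes the pypdf package is installed


def pdf_to_csv(pdf_path: str, csv_path: str) -> None:
    """Map each page of the PDF to a row with a stable web address."""
    reader = PdfReader(pdf_path)
    with open(csv_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["page", "address", "text"])
        for i, page in enumerate(reader.pages, start=1):
            text = (page.extract_text() or "").strip()
            # A well-defined web address for each key part of the content
            # (here, simply a page-level fragment on a hypothetical host).
            address = f"https://example.org/knowledgebase/report.pdf#page={i}"
            writer.writerow([i, address, text])


if __name__ == "__main__":
    pdf_to_csv("dashboards_report.pdf", "dashboards_report.csv")
```

Once the content is in a flat, addressable format like this, it can be imported into whatever dashboard tool a given audience prefers (step 3) without locking the data to any one vendor.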
I am currently applying these four lessons learned and the Tufte principles to pilot dashboards and dynamic case management systems for the DoD, and will report on that in another story. My goal is to produce dashboards that are useful to senior military officials.

The process has reminded me of the importance of the people behind these principles, who understand and know how to create functional dashboards, and of why many dashboards fall short of the mark.

Aneesh Chopra recently said that “the government needs data science and data scientists.” Well, I would like to volunteer to be his first chief data scientist and improve his dashboards!

_________________________________________________________________

Here are the more detailed summaries of the Four Lessons Learned:

Lesson One: Data Quality is Key to the Credibility of Dashboard Performance Measures
The dashboards in the case studies (especially the cross-agency ones) have faced data quality issues. This compromises dashboard performance measures and could eventually damage the dashboard’s credibility. To overcome some of the data quality issues, standardized data definitions and training of key agency personnel are required. Adopting a standard schema, such as the Extensible Business Reporting Language (XBRL) used in business applications, for federal financial dashboards such as Recovery.gov or USAspending.gov would enhance data quality and reporting efficiency.
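
As a rough illustration of how standardized data definitions catch quality problems before they reach a dashboard, here is a hypothetical Python sketch; the field names and rules are illustrative only and are not drawn from Recovery.gov, USAspending.gov, or the XBRL specification.

```python
# Hypothetical sketch of validating records against standardized data
# definitions before they feed a dashboard. Field names and rules are
# illustrative, not taken from any actual federal schema.
from datetime import datetime


def _is_iso_date(value) -> bool:
    try:
        datetime.strptime(str(value), "%Y-%m-%d")
        return True
    except ValueError:
        return False


# A "standard schema": each field maps to a validator returning True/False.
SCHEMA = {
    "award_id": lambda v: isinstance(v, str) and v.strip() != "",
    "agency_code": lambda v: isinstance(v, str) and len(v) == 4 and v.isdigit(),
    "award_amount": lambda v: isinstance(v, (int, float)) and v >= 0,
    "award_date": _is_iso_date,
}


def validate(record: dict) -> list[str]:
    """Return a list of data-quality issues found in one record."""
    issues = []
    for field, check in SCHEMA.items():
        if field not in record:
            issues.append(f"missing field: {field}")
        elif not check(record[field]):
            issues.append(f"invalid value for {field}: {record[field]!r}")
    return issues


# Example: a record with a malformed agency code and a negative amount.
print(validate({"award_id": "A-1", "agency_code": "97",
                "award_amount": -5, "award_date": "2011-07-01"}))
```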

Lesson Two: Best Practices Resources Are Necessary in the Design and Use of Dashboards
Agencies have different design approaches to dashboards. Whereas the USPTO dashboards are visually rich, the FDA-TRACK dashboards are essentially tables. The Recovery.gov and
USAspending.gov dashboards feature maps. Although design may be idiosyncratic and vary based on technical capacity within the organization, a set of best practices or standards would enhance design quality. The Usability.gov website, developed a decade ago, enhanced government websites by providing standardized guidelines. A website for standardizing dashboards or providing best practices would be equally useful. Focus group feedback would assist in enhancing the usability of the dashboards, as would the creation of communities of practice within government.

Lesson Three: Performance Measures Should Reflect Organization Goals
Performance measures differ based on agency needs. Cross-agency dashboards have common measures. The essential approach should be to align performance measures to organizational goals. This increases the usability of dashboards. Responding to different audiences requires reporting different performance metrics. Indeed, performance measures in some dashboards (e.g., Recovery.gov, USPTO’s Data Visualization Center, FDA-TRACK) evolved in response to different audiences’ needs.

Lesson Four: Dashboards are Only Tools; Effectiveness Depends on Use
Dashboards are only tools to visualize performance data. Their effectiveness depends on how organizations use them to enhance internal performance and external accountability and transparency. Organizations should be cognizant of both the strengths and weaknesses of dashboards. Dashboards need to be useful to the organization’s purposes. In internal organizational management, this implies that dashboards are used in the decision-making process (e.g., the face-to-face sessions based on the Federal IT dashboard and FDA-TRACK to identify weak projects). At the external accountability level, use of dashboards means that agencies are exposing their performance metrics to public scrutiny. In this context, both the dashboard performance measures and the underlying data need to be publicly accessible for credible organizational accountability.