Governments are overwhelmed balancing consumer expectations, an aging workforce, regulations, rapid technological change, and fiscal deficits. This blog gathers a community of subject-matter experts who discuss trends and outline how public sector organizations can leverage relevant best practices to drive their software-led transformation and build the future of technology – today!


Why AI in Social Programs is Inevitable (and a Good Thing)

There has been some press about predictive analytics in social programs, child welfare in particular. Predictive analytics can be defined as a simple application of artificial intelligence (AI): given enough data and enough rules, the system figures out (in the case of child welfare) which children are at risk and when an intervention should be made.
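To make the "data plus rules" idea concrete, here is a minimal sketch of the kind of rules-based risk score such systems typically start from. The field names, weights, and threshold are illustrative assumptions, not drawn from any real child-welfare model.

```python
# Hypothetical risk-scoring sketch: weighted risk factors plus a rule
# (a threshold) that flags cases for human review. All names, weights,
# and the threshold below are illustrative assumptions.

RISK_WEIGHTS = {
    "prior_referrals": 0.4,       # each factor is normalized to [0, 1]
    "missed_visits": 0.3,
    "household_instability": 0.3,
}

THRESHOLD = 0.6  # scores at or above this trigger a human review

def risk_score(case: dict) -> float:
    """Weighted sum of the case's normalized risk factors."""
    return sum(RISK_WEIGHTS[k] * case.get(k, 0.0) for k in RISK_WEIGHTS)

def flag_for_review(case: dict) -> bool:
    """Apply the rule: flag the case if its score crosses the threshold."""
    return risk_score(case) >= THRESHOLD

case = {"prior_referrals": 0.9, "missed_visits": 0.5, "household_instability": 0.7}
print(round(risk_score(case), 2), flag_for_review(case))  # → 0.72 True
```

A production system would learn the weights from historical case data rather than hand-tune them, which is exactly the shift from static rules toward the machine-learned models discussed below.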

The results have been incremental, but this is also a technology in its infancy. As you move from analytics to true AI with a larger data set, you can predict more accurately. The logical path is to give the AI as much data as possible and let it find the relationships. As computing power becomes less expensive and algorithms improve, access to data will expand and the results will get better over time. This benefits the citizen, both through better outcomes and through more effective programs at a lower cost. And avoiding new headlines highlighting catastrophic failures of the system benefits everyone.

AI has two aspects: preventing the bad and promoting the good. The bad is everything that harms people: bad outcomes, bad encounters, fraud, waste, and abuse. The good is timely interventions, the right set of programs to enable self-sufficiency, and empowering caseworkers to work with people rather than act as data entry clerks. Today there are many separate systems: systems that assess eligibility, systems that look after the welfare of children, systems that detect fraud, waste, and abuse, and so on. These are multiple parallel threads for preventing bad outcomes and promoting good ones. AI looks to unify those threads: finding relevance in a sea of data, maximizing the good, minimizing the bad, and delivering at an efficient cost. There are, of course, privacy concerns; they are real and must be addressed. But we should be thinking about them now and anticipating how to ensure privacy in an AI age.

There are concerns that computers will take over, with caseworkers becoming obsolete or mere attendants to AI systems. In one narrow sense, this is true. Due to increased caseloads, caseworkers spend up to 70% of their time on case-related analysis and administrative work. Very few people go to the trouble of earning an MSW degree to do data entry. Automated bots and AI can do that job for them. Software analyzing large data sets with AI will identify interventions faster than human analysis and intuition can today. Caseworkers will no longer be tethered to their monitors assessing data. Humans will be freed up to work with other humans, providing the social contact and increased interaction required to deliver benefits with dignity and compassion. And that's a good thing.


I'll publish a white paper shortly on how agencies can implement AI for their social programs. Subscribe here to receive a copy.
