When our third cohort kicked off in late spring, it was clear an evolution had taken place. Our second cohort had just wrapped in March, so it had only been a matter of weeks, and yet the baseline of AI exposure among this new group had already risen. Instead of asking, “What’s a GPT?” they were asking, “Do I need a custom GPT?” and were already experimenting with incorporating AI into their workflows.
Our own baseline had shifted, too. The Decoded Futures team came into this cohort knowing how to build fluency and increase participants’ technical confidence; our challenge this time was more nebulous. On one hand, we had to convey the ambiguity inherent in AI problem-solving, namely that there’s no single best path to take.
On the other hand, we needed the cohort to understand how to think computationally, because that’s what leads you to identify what your problem actually is in the first place.
Here’s what we learned after our third Decoded Futures cohort:
Embrace ambiguity.
Over the last year, we’ve seen three distinct patterns emerge in relation to ease of AI learning. As with any skill, people will take to it at different speeds:
Group 1: Fully embraces the ambiguity of using AI and is excited about the challenge.
Group 2: Experiences some growing pains, but eventually digs in and finds solutions.
Group 3: Wrestles with the inherent ambiguity of AI problem-solving and consistently feels real discomfort sitting in it. (We get it!)
Interestingly, these groups don’t map to job titles, demographics, or even prior AI usage. The group that took to AI problem-solving the fastest – Group 1 – didn’t have any special training or characteristics. What made the difference for them was comfort sitting in ambiguity. Think of learning to float: it comes much easier if you’re relaxed and lean into it.
Computational thinking is essential.
This cohort arrived raring to build; some members had already made a custom GPT and others were even generating functional software through “vibe coding.” Technologically, the results were more advanced than what we’d seen in the past. But it was also clear that exposure to AI tools (trying out the latest wrappers or LLMs) didn’t necessarily correlate with understanding how to use those tools most effectively.
And you won’t know which tool makes the most sense for a given problem if you can’t clearly identify what that problem is (it may not be what you originally thought!).
Enter: computational thinking. At its most basic, computational thinking means breaking down a problem systematically. In our program, we teach members to map out their existing workflows with as much detail as possible.
Let’s say you’re applying for a grant. Before you even open a document to begin drafting, you need to identify funding opportunities, review guidelines, gather data, and contact stakeholders.
Looking at each discrete step in a process reveals the places where AI might be able to streamline or support. Alternatively, you could discover that AI isn’t necessary for a particular blocker at all, and it’s human interaction – having a conversation with a coworker, let’s say – that’s required.
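If it helps to see this mapping written down, here’s a minimal, illustrative sketch in Python – not something cohort members are asked to write, and the steps and labels are just assumptions drawn from the grant example above – of walking a workflow one discrete step at a time and flagging where AI might plausibly help versus where a human is needed:

```python
# Illustrative sketch only: a hypothetical grant-application workflow,
# broken into discrete steps. The "ai_can_help" labels are guesses for
# the sake of the example, not a verdict on any real tool.

workflow = [
    {"step": "Identify funding opportunities", "ai_can_help": True,
     "note": "e.g., summarize long funder lists"},
    {"step": "Review guidelines",              "ai_can_help": True,
     "note": "e.g., pull out eligibility criteria and deadlines"},
    {"step": "Gather program data",            "ai_can_help": False,
     "note": "real numbers have to come from internal systems"},
    {"step": "Contact stakeholders",           "ai_can_help": False,
     "note": "a conversation with a coworker, not a prompt"},
    {"step": "Draft the proposal",             "ai_can_help": True,
     "note": "e.g., a first-pass outline to edit"},
]

# Walking the list makes the "where could AI help?" question concrete,
# one step at a time.
for item in workflow:
    marker = "AI candidate" if item["ai_can_help"] else "human task"
    print(f"{item['step']:<35} -> {marker}: {item['note']}")
```

The point isn’t the code itself; it’s the habit of naming each step explicitly before deciding whether any tool – AI or otherwise – belongs there.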
That distinction is important! The goal of our work isn’t to create AI evangelists, or use AI for the sake of using AI; we want to equip nonprofit leaders with a way of thinking that will empower them to work through almost any problem.
The power of structured exploration.
The feedback we received most often – and which won’t be surprising to anyone who’s ever struggled to sit down and tackle an extracurricular project, no matter how interesting – revolved around the value of these three program aspects:
Dedicated, structured, recurring time, with encouragement to explore.
Pairing with volunteer technologists from the very beginning.
In-person peer support for real-time problem solving and brainstorming.
This structured environment created momentum through shared discovery while ensuring no one got left behind.
Success with AI isn’t about finding the “best” tool or following a perfect playbook. It’s about developing comfort with ambiguity, thinking systematically about problems, and having support to experiment and fail.
Finally, as this cohort showcased so well at their recent Demo Day, you build and refine and iterate and fail and fail some more, and that whole process of trying and failing, then failing better, is what we mean at Decoded Futures when we talk about “success.”
What’s New?
Eng(INE) offers nonprofits free, dedicated AI engineering teams to build cutting-edge technology advancing their organization’s social and economic opportunity goals.
Applications for their next cohort are open until July 30! Check out their website to apply and learn more: nonprofitengine.org.
Bethany Crystal is running a virtual Summer AI School August 11–15. Join her to build a tool – or a business!
We’re cooking up something big with OpenAI — keep your eyes peeled for an update on July 17! Until then, you can reach us at decodedfutures@technyc.org for any questions, comments, or suggestions.
Making toast!