
This year, I’m attending the 2026 Nonprofit Technology Conference virtually. After lots of learning from days one and two, we’re already at the last day of 26NTC!
These notes, and this conference time, always provide such an important space for thought and distance from my day-to-day work. If you found any of this useful, reach out and let me know. And if this seems like a place you want to be and learn, 27NTC will be March 23-26 in Portland, Oregon. I certainly hope to be there, and maybe I’ll see you there too!
Day 1 Keynote Recap: Anil Dash
I haven’t been able to catch the keynotes, but I didn’t want to miss hearing from the excellent, thoughtful Anil Dash. Making myself post these notes is how I’m going to ensure I watch and pay attention to it!
It has never been this hard for nonprofits.

Hard work with people who share your mission and values is a joy. It feels good.
He wants to talk about two key things when thinking about technology and burnout:
- Power (A sincere belief: Everybody has power!)
- Systems (Classic maxim: “The purpose of a system is what it does.” This is basically the definition of cybernetics.)
When we don’t get the output we want from a system, we think it’s broken. But maybe it isn’t broken; maybe it’s working exactly as it was designed.
Important note: The past few years of layoffs have disproportionately impacted people working on things like “trust and safety” and other caring parts of the sector. It makes you raise an eyebrow at these layoffs being attributed to “AI”.
So “how are we feeling about AI”? (My honest confession—see more below—I generally feel horrible about it!) Many tech users are feeling the same in response to AI being added to products without their consent.
A recent faint glimmer of hope for Anil (who spends a ton of time with tech workers): People are starting to build their own tools.
The phrase and action around “AI is inevitable” is a strategy. The AI industry generates profits by consuming public goods, pushing risks onto workers, capturing regulators, and fighting open alternatives. This strategy has been used before: by the social media companies that built walled gardens closed off from the open web (someone in the audience at this moment yells, “RSS!” ❤️), by the “gig economy” companies (Uber, TaskRabbit, etc.), and by cryptocurrency companies too.
Being subjected to this pattern of behavior over and over creates increasing frustration.
In Anil’s experience, tech/startup CEOs are obsessed with inevitability and accept or even embrace the negative consequences (for others) that their power and decisions create. They’re also convinced they have no power.
From here, he made a fascinating argument: the people at the very top of the corporate power hierarchy (private equity firms and other capital investors) complain that they are held back by regulators, activists, journalists, etc. Interestingly, those are the folks who represent the workers at the bottom of a typical top-down org chart. So he has observed that, in some ways, the power structure is a circle. Anyone can be on top if they recognize that, organize, and don’t lend their power to those they don’t want wielding it.
Examples of us fighting back with informal collective action: NFTs and the metaverse. People chose not to use these tools (and therefore not to pay for them) and it tanked those industries.
A saccharine but true closing: The big AI companies have all the money in the world. We have all the heart and soul.
Legal considerations with AI: managing risk, trust, and mission
Lauren Wallace (Wallace Tech Law)
Collaborative Notes for “Legal considerations with AI: managing risk, trust, and mission”
Slides for “Legal considerations with AI: managing risk, trust, and mission”
Note: This session was at the same time as “Avoiding tech landmines for small or new non-profits” (slides). If you’re a new or small nonprofit, I would definitely check out those slides, which stand on their own pretty well. They’d pair nicely with my presentation about Website Basics for Small Nonprofits!
When trust erodes, impact erodes.
AI scales risks because it scales decisions and actions.
It’s a common myth that AI is “unregulated”. The use of AI overlaps with existing privacy, consumer protection, and lots of other laws (COPPA, FERPA, HIPAA, etc.), so our use of AI is governed by those laws just like any other technology would be.
“UDAP laws” = “Unfair and Deceptive Acts and Practices” laws at both the federal and state levels. The European Union is taking a much more comprehensive approach that will impact people working in Europe. The legal landscape is changing fastest at the state level right now (1,200 AI-related laws were proposed in state legislatures this year).
Key themes of new AI laws
- Decisions about people: seeking fairness and avoiding discrimination
- Transparency and disclosure: making sure organizations say when AI is being used, including disclosing when someone is interacting with AI and labeling AI-generated media
- Data and inference controls: limits on profiling and biometric data collection, and treating AI-inferred information about people as private personal information
- Accountability and governance: documenting, testing, and maintaining oversight of AI systems
Examples of legal risks
- Video transcription tools (and recording entire meetings): includes both disclosure and inference impacts
- Chatbots: transparency of interacting with a bot; bias
- Donor scoring: data and inference controls (e.g. inferring a person’s race or income based on other information)
- Resume filtering: decisions about people. This is one of the most heavily regulated uses of AI and other technology; multiple class action suits are popping up around this issue due to bias and discrimination.
- Eligibility decisions: similar to the above. Think about an AI tool determining eligibility for housing or other social services.
- Cookies & tracking: transparency and control around technologies that track people and then process the tracking data
AI as a Risk Multiplier

- Scale: A small decision can now be scaled by AI across far more people
- Speed: Scaled impact that arrives faster than anyone can track or respond to
- Inference: Assumptions and associations made by AI
- Opacity: The inability to understand and explain a decision
- Autonomous retraining: A system changing its own behavior over time for opaque reasons
For nonprofits, trust is infrastructure. Legal compliance alone does not preserve trust. Mission alignment and transparency do.
Real-world scenarios and their risks
- Scenario: An education nonprofit uses AI transcription and summaries to support tutoring sessions.
  Risk: FERPA exposure. Student grades and other information about minors are captured by the service.
- Scenario: A nonprofit mental health clinic deploys a chatbot.
  Risk: HIPAA violation. If people submit Protected Health Information, then storage and retention of that data is governed by HIPAA.
Managing these risks
So much of this comes down to managing vendors and third-party contracts and agreements:
- Data processing
- “Subprocessor chain”
- Model training
- Retention and deletion policies
- Jurisdictions that apply
Standard vendor contracts protect the vendor, not the nonprofit. Negotiating a contract is like dating before choosing a long-term committed partnership.
Try to use “enterprise contracts”, which come with more data controls.
Leadership framework for AI decisions
- What decisions will be made by these tools? (Fully understand how you’re using AI.)
- Who is impacted by these tools, and what are the power dynamics?
- What data is required and what is inferred? Where is the data stored and processed?
- Does this create new legal risks?
A really, really important point that always comes up around technology policies: Training and involving staff is critical because they are the ones who must implement the policies (and can go off the rails using their own tools).
Using tech ethically: Balancing efficiency with equity
Tracey Braun (Witch-Ways)
The current “AI” moment can feel like it leads to a dichotomy of ignore or embrace. There are risks to both:
- Ignore AI and risk:
  - Personal and professional impact
  - Impacting your organization
  - Impacting the nonprofit sector
- Embrace AI and risk:
  - Environmental impact
  - Living in a capitalist system
  - More of our income/funding going to tools
I’ve been thinking so much about AI recently, and I’d add to the list of risks:
- Deskilling: Loss of skills and “pulling up the ladder” by failing to train future generations
- Erosion of critical thinking and loss of institutional memory
- Relationships and community lost to, or mediated by, AI
- Decrease in creativity and imagining different futures, ossification of existing inequitable systems
It’s always a good idea to make technical decisions from values and guidelines. Four to use:
- Transparency: Explain what will happen before it happens
- Accountability: We must own the impact of our actions regardless of intentions
- Sustainable Growth: Do our work with future generations in mind
- Integrity: Lead with ethics, even when it’s not the easy choice
How do we apply these to the lens of AI?
- Transparency
  - Discussing AI use with clients
  - Data retention guidelines
- Accountability
  - Statement for ourselves
  - “Shadow AI”: AI used without our knowledge (either by a single staff person or volunteer, or as part of a tool where it’s not clearly disclosed)
- Sustainable Growth
  - Using AI to support, not replace, people’s work (My note: see the “reverse centaur” metaphor)
- Integrity
  - Keeping data safe
  - Paying for tools that matter
  - Listening to other voices. Who isn’t in the conversation?
Good example: AI appears to be increasingly good at translation, but what are the risks of a single mistranslation? How can you guard against this? One option is sketched below.
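The session didn’t prescribe a specific guardrail, but here’s a minimal sketch of one common approach: a round-trip check that translates the output back into the source language and flags big divergences for a human reviewer. The translate() function is a hypothetical stand-in for whatever service you use, and the 0.8 threshold is a guess you’d tune against real examples.

```python
import difflib

def translate(text: str, to: str) -> str:
    """Hypothetical stand-in: wire this up to your translation provider."""
    raise NotImplementedError

def needs_human_review(source_text: str, target_lang: str, source_lang: str = "en") -> bool:
    """Translate there and back, then flag the text if the round trip drifts too far."""
    round_trip = translate(translate(source_text, to=target_lang), to=source_lang)
    similarity = difflib.SequenceMatcher(None, source_text.lower(), round_trip.lower()).ratio()
    return similarity < 0.8  # a guess; tune against real examples from your community
```

The point isn’t the specific similarity metric; it’s that a cheap automated check can route risky translations to a person instead of shipping them unreviewed.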
Data collection has been a consistent theme throughout all three days. Collecting less information—only the information we actually will use—is such an important baseline.
A very good and interesting point: Implementing AI responsibly requires lots of time and effort (the thing AI is supposed to save).
Fin.