Talk is Cheap. It's Time for a 'Proof of Impact' Score!

The drive to help create a more equitable and transparent world is a significant inspiration to me, and the energy in this community is really outstanding.

As I have dived into Project Catalyst, I am amazed by the amount of innovation the community has funded. We seem to be great at coming up with and supporting new ideas! It feels like we have made a habit of growing ideas from seeds.

This leads me to a question that has been on my mind for some time, especially now in the Voltaire era, when our decisions make a collective difference more than ever: how do we measure the harvest?

After all, a project gets funded, the project completes its roadmap, and then what? We get completion reports, but then there is a gap before we hear anything further about the project. My underlying question is: are we tracking whether the millions of ADA we allocate from the treasury actually translate into long-term, real-world outcomes?

It feels like there is a gap between ‘Project Funded’ and ‘Tangible Value Created for the Ecosystem’.

To bridge this gap, I have been wondering whether we as a community could conceptualize a ‘Proof of Impact’ (PoI) Score.

This wouldn’t be a pass/fail grade; instead, it would be a living metric that helps us all better understand a project’s long-term health and contribution. It could include things such as:

● User Adoption: How many users are actively using the dApp or service?

● Ecosystem Integration: Is the project being used by other projects in the Cardano ecosystem?

● Community Sentiment: How is the project perceived in the broader community one year after funding?

● Problem-Solution Fit: Is it actually solving the problem it was created to address?

Having this kind of data easily available would be extremely beneficial for voters evaluating a new proposal, and for DReps (Delegated Representatives) making informed recommendations. It would take us from funding promises to rewarding proven impact.

However, this is merely an idea from a newcomer. The wisdom is in this community, and I would like to open the floor for discussion on:

  1. How do you track the success of projects you voted for in past Catalyst funds?

  2. Is a “Proof of Impact” score something that would add value? Or would it simply become more administrative burden for builders?

  3. What specific metrics would YOU include to measure the true value of a project to Cardano?

  4. For the DReps out there, how valuable would this data be for you after project funding?

Thank you for taking the time to read this, and I look forward to hearing from you!


I have been thinking about this for a while. IMO, Catalyst hasn’t really delivered much actual value for the amount of funding it has distributed. The Horizons reporting doesn’t address impact at all; it reads more like a tech-stack developer reporting on its own adoption (use of Catalyst, not usefulness of Catalyst, which raises the question: useful to whom, and how?). Currently we seem to be measuring Catalyst by how much money it gives away and how many people it gives it to, but that doesn’t teach us much. We give away free money and people take it. Imagine that. I think this same issue will come up in broader Cardano governance as well.

I have played around with various forms of what you are talking about (pretty sure I even called one of them “Proof of Impact” :sweat_smile:), and while some projects lend themselves fairly well to what you have described, there are underlying tensions built into the landscape of Web3 innovation, and of general, radical transformative change, that defy something as simple as saying “this project was successful”.

  • Social innovation means that testing and refinement are always ahead of us, with no robust standards to fall back on and real challenges in reaching the primary intended users
  • Start-ups and early-stage programs must often keep extremely fluid goals, which don’t lend themselves to “waterfall-style” lock-in via milestone programs
  • Multi-sector collaborations and policy shifts occur in complex environments: high-dimensional, rapidly changing, and non-linear
  • Ultimately, almost all of these areas of application are environments of high uncertainty, where cause-effect relationships are not yet clear (and where the “failure” of a funded project can be harvested as learning and collective intelligence, rather than punishment or ostracization of those who take the first risks)

The more I study the issue, the more I conclude that accountability and impact in these spaces look less like proving the success of projects and more like improving the evaluation culture of the ecosystem, especially around the treasury as a shared resource or commons. Your urge to improve the decision-making of DReps and other voters is exactly right, but I feel it needs to start long before the vote and apply to all projects, not just those that have been previously funded.

IMO, these are the key areas where we can focus to move from a transactional act (money for outputs) to a transformational practice (investment in adaptive capacity and ecosystem coherence). Over time, this could turn the treasury into a true commons steward, accelerating innovation while building resilience and avoiding the trap of siloed, duplicative efforts:

  • Move From “Projects” to Ecosystem Stewardship: deprioritize outputs and organizational competence and explicitly evaluate ecosystem contribution: does the proposal complement, extend, or integrate with existing efforts? (how we see and map existing efforts is a dependency, but the knowledge exists as a community knowledge asset. Current methods like ComRev bury this rather than expose it)
  • Embed Continuous Adaptation: we want not just capable builders but adaptive learners and collaborators. Are proposers able to show how they will evolve? Things like learning in public, integrating with peers, and real adaptation. The ability to show network fit, not just standalone promise.
  • Value for Money = Ecosystem Learning: how does a project integrate into the shared operating system? Are they aligned with existing infra, extending where possible, collaborating and joining rather than denying and competing?
  • See Funding as a Transformative Cycle: Proposal criteria moves off the rigid, defensive posture to hypothesis-driven, ecosystem-aware, adaptive proposals. Proposal (and thus ecosystem) portfolios align to clearly show gaps and help understand where funding should flow (rather than committee-led roadmaps, which are capturable). Culture evolves to value collaboration, integration of feedback as a given, and open source contributions as norms.

I piloted a version of this in my ComRev work in Fund 14; scores, and thus rankings, reflected traditional scoring, but my reviews reflected this new outlook on what makes a good proposal, and feedback was directed toward improvement in these areas. I’m linking the notes for that pilot here because there are other useful resources there, including a quick introduction to the concept of “Developmental Evaluation”, which is a key part of this approach to innovation. (The doc has multiple tabs.)

Cheers, and thanks for bringing this topic up. :handshake:
