Distributed Decision Making - List of proposals endorsed by Rodrigo

Why I Chose Distributed Decision Making as the theme of my selection list

The concept of Distributed Decision Making (DDM) refers to the collective capacity of a community or network to make informed, accountable, and transparent decisions without relying on centralized authority. Instead of concentrating power in a small group of actors, DDM distributes decision rights across multiple stakeholders (proposers, reviewers, dReps, SPOs, or voters), each contributing according to their role, expertise, and delegated trust.

In the context of Cardano governance, DDM is not just a theoretical principle; it is the practical foundation of how treasury resources and governance actions are decided. Its key characteristics include:

  • Diversity of inputs: decisions should reflect contributions from many participants, not a narrow set of actors.
  • Accountability mechanisms: participants must justify and document their reasoning, enabling public scrutiny.
  • Transparency: decision processes and rationales should be visible and auditable, preventing hidden influence or capture.
  • Efficiency in resource allocation: DDM seeks to optimize the distribution of funds so that community-driven priorities align with treasury spending and governance outcomes.

Without strong distributed decision-making practices, the ecosystem risks low-quality reviews and rationales, concentration of influence, and misallocation of resources. Strengthening DDM therefore has a direct impact on sustainability: it is how Cardano can scale decision making fairly while stewarding community resources responsibly.

I chose this theme for two main reasons:

Reason 1 - My background in the Cardano ecosystem

The first reason is that decision making is where my trajectory in the ecosystem is deepest and most relevant. Over the past five years, I have been continuously engaged with Project Catalyst, serving as a proposal reviewer, a review moderator, and, for more than a year, a milestone reviewer, validating funded projects and gaining critical insight into accountability bottlenecks.

Personal Expertise and Background on Project Catalyst

My insights derive from direct research and end-to-end operational experience across Catalyst:

  • Active since Fund 2 (2020 → present)
  • Community Reviewer (L1): reviewed 1,000+ proposals.
  • Community Moderator (L2): continuously active since Fund 3 (≈4+ years), moderating community reviews (cumulative total in the 5,000–10,000 range over all funds).
  • I co-designed early moderation guidelines (Funds 3–5).
  • Milestone Reviewer (since Fund 11): validated 100+ Proofs of Achievement, ensuring deliverables before disbursements.
  • Active dRep (~6 months): participated in dozens of votes, consistently submitting on-chain rationales to strengthen transparency.
  • Funded Proposer / Challenge Setting Author: authored/co-authored 9 Challenge Settings approved by community vote, collectively mobilizing ~$3M in treasury funding; submitted multiple proposals, with 2 funded by the Cardano Treasury.

My background in Cardano Governance, Research & Community Projects

  • Manifesto Cardano Brasil (funded proposer): community sensing with surveys, debates, and reports on representation, governance, and CIP-1694.
  • AGORA (founder): independent research and advocacy initiative dedicated to improving the quality and decentralization of decision making in Cardano via frameworks, critical analysis, and education.
    • Agora Research Bureau: publishes open-access reports with clear rationale and methodological transparency; complements on-chain vote metadata submitted as a registered Voltaire dRep.

Reason 2 - Concerning problems in the Cardano decision-making landscape

The second reason is grounded in the recurring problems I have observed across both Project Catalyst and Cardano on-chain governance. Over five years of continuous participation, several bottlenecks became evident:

  • Low quality and limited utility of community reviews on Catalyst: In early funds, it was difficult to even reach 10 reviews per proposal. After Fund 10, dozens of reviews per proposal became common, but very few voters actually read them, reducing their real influence on decision outcomes.
  • Shallow voter engagement: Both survey data I collected and publicly available voting data confirm that most voters dedicate very little time to the process, casting votes on only a small number of proposals. The number of unique wallets voting per proposal remains relatively low, which weakens representativeness.
  • Weak or absent rationales: In Catalyst and CIP-1694 governance, many dReps either do not vote, abstain without justification, or provide rationales that are short and superficial. Robust, well-reasoned explanations remain uncommon, which undermines accountability and quality of decision making.
  • Inactive or inconsistent participation: Several governance actions receive little attention from dReps, leaving important decisions to a narrow and inconsistent group of participants.

Together, these factors signal a low overall quality in the decision-making process. They also represent systemic risks: concentration of influence, lack of accountability, and potential misallocation of treasury resources. Addressing these bottlenecks is essential for improving governance outcomes and ensuring the sustainability of the ecosystem.

These experiences highlighted recurring bottlenecks in the system that must be addressed to ensure treasury funds are used effectively and that the most capable teams and contributors can be nurtured. For me, decision making is the highest-priority category, because it directly shapes how resources are allocated. Improving the qualification of participants, the robustness of justifications, and the accountability of outcomes is essential to avoid waste and, ultimately, to secure the long-term sustainability of the Cardano ecosystem.

Proposal Selection Methodology

Fund 14 Categories and Proposal Distribution

At the time of this review, there were 1,699 proposals submitted for Fund 14.
(Note: this number was subject to later adjustments by the Catalyst Team due to the L2 Moderation Bounty program – see details here).

The categories and proposal counts were as follows (data from Catalyst Explorer):

  • Cardano Open: Developers
    • 329 proposals available at the time of analysis
    • ₳3,100,000 in available funds
  • Cardano Use Cases: Partners & Products
    • 168 proposals available at the time of analysis
    • ₳8,500,000 in available funds
  • Cardano Open: Ecosystem
    • 600 proposals available at the time of analysis
    • ₳3,000,000 in available funds
  • Cardano Use Cases: Concepts
    • 602 proposals available at the time of analysis
    • ₳4,000,000 in available funds

Process Overview

Pre-Screening Stage with AI

As part of the evaluation process, I adopted an initial thematic pre-screening stage using artificial intelligence (AI).
This stage aimed to identify whether a Fund 14 proposal was aligned with the defined core focus: improving distributed decision-making in funding programs, with emphasis on due diligence, accountability, and alternative funding models.


Tools and Data Sources

The pre-screening process was conducted using Catalyst Explorer, developed by LidoNation.
This tool provided the best available UI/UX at the time, enabling:

  • Filtering of proposals by tags and categories.
  • Creation of bookmark lists to organize the review workflow.

The official Catalyst application (app.projectcatalyst.io) was under maintenance and did not allow for proper navigation during this period.
The official Catalyst website (projectcatalyst.io) was functional but lacked the advanced filtering and bookmarking features needed for systematic review.


1. Use of Tags in Fund 14

Each Fund 14 proposal was required to self-assign a thematic tag.
Because the official Catalyst app was unavailable, I used ProjectCatalyst.io to cross-check the list of tags:

  • Community & Outreach
  • DeFi
  • Development & Tools
  • Education
  • Events & Marketing
  • GameFi
  • Governance
  • Identity & Security
  • Interoperability
  • NFT
  • Real World Applications
  • Smart Contracts
  • Sustainability

The Governance tag proved to be the most aligned with this methodology. However, relevant proposals were also found under other tags, while some categories (e.g., Sustainability, DeFi, NFTs, GameFi, Interoperability) showed only weak or tangential relation to decision making.


2. First Screening Cycle – Cardano Open: Ecosystem

For the Cardano Open: Ecosystem category (≈600 proposals), I did not initially know which tags would dominate.
Therefore, I applied an AI prompt to every single proposal in this category.

This comprehensive run provided insights into which tags and proposal types were most closely related to the research objective of distributed decision making in the Cardano ecosystem.


3. Classification with AI Prompt

The AI prompt was designed with GPT-5, following thematic screening guidance. It is optimized to assess each proposal's alignment with the methodology: to check whether the main focus is distributed decision making in the Cardano ecosystem, and to distinguish core governance improvements to decision making (Catalyst, Voltaire, dReps, accountability) from mere adoption, marketing, or generic training.

It was written so that every time a proposal is analyzed, the evaluation follows the same criteria.

In other words, the prompt acts as a structured filter, ensuring that responses don't become subjective or scattered; they always come out in the same format, with a justification and a category.

:backhand_index_pointing_right: It is therefore a fixed evaluation guide, making the analysis consistent, objective, and comparable across different proposals.

Proposals were classified as:

  • :white_check_mark: Aligned – when decision making and governance were the central objective.
  • :cross_mark: Not Aligned – when there was no clear connection to decision making.
  • :warning: Borderline – when decision making and governance appeared only as a secondary or indirect benefit.

Borderline cases were deliberately included in the workflow to avoid prematurely excluding proposals with potential impact.
In such cases, I conducted a deeper manual review to determine whether decision making was a central and evident objective. If it was not, the proposal was discarded.
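The routing described above can be sketched as a small, deterministic dispatch. The sketch below is illustrative only: the `Alignment` labels and the `next_step` helper are my own naming, not part of the actual prompt or tooling. It shows the key design choice that Borderline cases are always routed to manual review rather than discarded.

```python
from enum import Enum

class Alignment(Enum):
    """Fixed output labels produced by the pre-screening prompt (names assumed)."""
    ALIGNED = "aligned"          # decision making / governance is the central objective
    NOT_ALIGNED = "not_aligned"  # no clear connection to decision making
    BORDERLINE = "borderline"    # decision making appears only as a secondary benefit

def next_step(label: Alignment) -> str:
    """Map a classification label to the next stage of the workflow."""
    if label is Alignment.ALIGNED:
        return "shortlist"
    if label is Alignment.BORDERLINE:
        return "manual_review"   # never discarded prematurely
    return "discard"
```

Because the labels form a closed set and the mapping is fixed, every proposal is guaranteed to end up in exactly one of the three downstream states.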


4. Leaner Approach in Later Categories

After completing the first cycle with AI, I refined the process into a more time-efficient approach:

  • A manual quick check was performed on each proposal (title, problem statement, solution description, and assigned tag).
  • If the proposal was in clear misalignment with the decision-making focus, it was immediately discarded.
  • In edge cases, the AI prompt was applied selectively to check alignment in greater depth.

This hybrid approach saved time while ensuring that potentially relevant but less obvious proposals were still captured.


5. Edge Cases

Some categories of proposals required deeper consideration:

  • Education & Onboarding:
    Many proposals referenced Catalyst or governance as part of broader onboarding initiatives.
    However, when governance was only a non-central module (e.g., basic blockchain concepts, wallet creation, SPO delegation, introductory dev tools), these were classified as Not Aligned.
    Additionally, proposals focused only on participant numbers without quality assurance in onboarding were excluded.
    The focus of this work is not on participation volume but on qualification of the decision-making process.

  • Development Tools & Analytics:
    Some proposals offered marginal governance benefits (e.g., generic blockchain analytics, APIs, or databases).
    These were excluded unless the tools were explicitly designed for governance use cases, such as dashboards for dReps, governance-related data processing, or accountability tracking.

  • Hackathons:
    While hackathons involve internal evaluation mechanisms, these are usually restricted to small juries and not replicable at the ecosystem scale.
    For this reason, most hackathon-focused proposals were excluded, as their decision-making models did not meaningfully contribute to Cardano-wide governance.


Justification for the Pre-Screening approach

  • Efficiency: combined tag-based filtering with AI screening to manage a large pool of proposals.
  • Fairness: screening across all proposals ensured that every team received at least a minimal level of attention. Although reviewing ~1,700 proposals individually is time-consuming, this methodology balanced inclusivity with professional scrutiny, avoiding favoritism toward already established teams.
  • Transparency: inclusion/exclusion criteria were explicitly documented.
  • Consistency: edge cases were evaluated with AI support rather than left to subjective judgment alone.
  • Methodological rigor: avoided common pitfalls such as accepting basic education or mass onboarding proposals that inflate participation numbers without contributing to better governance quality.

:backhand_index_pointing_right: This methodology ensured that only proposals directly focused on decision making, governance mechanisms, or alternative funding models advanced to detailed evaluation phases, while tangential or superficial initiatives were systematically excluded.


Post-Screening Adjustment and Alignment with L2 Moderation Outcomes

After completing the initial pre-screening process, I organized the selected proposals into four curated lists within the Catalyst Explorer platform, one for each category of Fund 14. These lists were saved as PDF files for record-keeping and transparency, and the links to these documents will be made publicly available.

With the pre-screening stage finalized, the next step was to proceed to a manual in-depth review. However, before starting this phase, the Catalyst team released a preliminary communication regarding the outcomes of the L2 Moderation Bounty Program. This announcement included a list of proposals that had been flagged and excluded during moderation.

Since some of the proposals identified in my pre-screening lists overlapped with those excluded by L2 Moderation, I undertook a manual cross-check. Each affected proposal was reviewed individually, and those confirmed as excluded were removed from my pre-screening lists.

This adjustment ensured that my subsequent manual reviews would only focus on proposals progressing to the voting stage, thereby maintaining methodological consistency and avoiding unnecessary evaluation of proposals no longer in scope.
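Mechanically, this cross-check reduces to a set difference over proposal identifiers. A minimal sketch (the function name and the numeric IDs are hypothetical, for illustration only):

```python
def remove_moderated(prescreened_ids, moderated_out_ids):
    """Drop pre-screened proposals that L2 Moderation excluded, preserving list order."""
    excluded = set(moderated_out_ids)
    return [pid for pid in prescreened_ids if pid not in excluded]

# Toy example with made-up proposal IDs:
kept = remove_moderated([101, 102, 103, 104], [102, 104])
# kept == [101, 103]
```

Using a set for the exclusion list keeps each membership check O(1), which matters little at this scale but keeps the cross-check trivially auditable.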

Proposal Counts Before and After L2 Moderation

Category       Pre-Screen (Before)   Excluded by L2 Moderation   Final Count (After)
Concepts       51                    7                           44
Ecosystem      42                    9                           33
Partners       13                    4                           9
Developers     12                    4                           8
Total          118                   24                          94

Scoring-Based Filtering of Proposals

Following the alignment with the L2 Moderation outcomes, I introduced an additional filtering step based on the preliminary Community Review scores recently published on the Project Catalyst Review Module.

For this stage, I applied a minimum threshold score of 3.5. Proposals scoring below 3.5 were discarded from the analysis. This decision was grounded in the observation that:

  • The majority of proposals achieved scores above this threshold, reflecting an acceptable level of detail and consistency.
  • Proposals with scores below 3.5 generally showed moderate to severe issues, such as missing details, inconsistencies, or lack of clarity.
  • While isolated exceptions may exist, the 3.5 threshold served as a flexible and moderate filter, removing only clear outliers rather than excluding borderline but potentially valuable submissions.

By adopting this scoring criterion, the pre-screened list was further refined to ensure that subsequent manual reviews would focus on proposals with a baseline of quality and detail, improving the efficiency and reliability of the overall evaluation process.
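As a worked illustration, the threshold rule reduces to a single comparison per proposal. The dictionary keys below are assumptions made for the sketch, not the actual Review Module schema:

```python
MIN_SCORE = 3.5  # minimum preliminary Community Review score to stay in scope

def passes_score_filter(proposals):
    """Keep proposals whose preliminary review score meets the 3.5 threshold."""
    return [p for p in proposals if p["score"] >= MIN_SCORE]

# Toy data mirroring the single exclusion observed in the Ecosystem category:
sample = [{"title": "A", "score": 4.2}, {"title": "B", "score": 3.3}]
kept = passes_score_filter(sample)  # only "A" survives
```

Note the `>=` comparison: a proposal scoring exactly 3.5 is kept, since only scores below the threshold were discarded.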

Impact of Score-Based Filtering (3.5 Threshold)

After applying the Community Review score filter (minimum threshold of 3.5), only one proposal was excluded across all categories.

  • The excluded proposal belonged to the Ecosystem category, with a score of 3.3.
  • No other proposals in the Concepts, Partners, or Developers categories fell below the 3.5 threshold.

Category       After L2 Moderation   Excluded by Score < 3.5   Final Count (After Score Filter)
Concepts       44                    0                         44
Ecosystem      33                    1                         32
Partners       9                     0                         9
Developers     8                     0                         8
Total          94                    1                         93

Final Filtering & Evaluation

After applying the pre-screening, moderation checks, and score threshold, the last stage of evaluation consisted of an individual alignment check and three qualitative criteria: Team & Track Record, Budget Granularity, and KPIs. These were applied with different levels of strictness to balance rigor with fairness.

Final Thematic Alignment Check

Before applying the three core criteria (Team & Track Record, Budget Granularity, and KPIs), I conducted a thematic review of the remaining proposals.
The purpose of this step was to verify whether the AI pre-screening prompt used in the initial stage might have been too lenient in borderline cases.

Each proposal was checked to confirm that its focus was sufficiently aligned with the core theme of distributed decision making. When a proposal was found to be not aligned strongly enough, it was excluded at this stage.

All such exclusions and their justifications were documented in the consolidation spreadsheet, ensuring transparency and traceability of the evaluation process.

1. Team & Track Record (Exclusionary)

This was the most critical criterion in the manual review and functioned as a hard stop.
Proposals were automatically excluded if the proposing team failed to demonstrate sufficient credibility or delivery capacity.

The evaluation included multiple checks:

  • Relevant Experience

    • Teams were expected to show prior involvement in Distributed Decision Making, governance, or the Cardano ecosystem, or have clear expertise in the specific field of their proposal.
    • If no relevant background was described, or if the description lacked substance, the proposal was excluded.
  • Verifiable Evidence

    • Claims of past experience needed to be supported by links to previous work, repositories, publications, or other verifiable references.
    • Proposals that listed experience but did not include links, or provided links that did not validate the claimed expertise, were considered unreliable and excluded.
  • Ongoing Proposals & Milestone Compliance

    • Teams declaring ongoing Catalyst-funded proposals were checked against the Milestone Module.

    • If a team had delays of six months or more in delivering Milestone Proofs of Achievement, and no plausible justification was provided, the proposal under evaluation was excluded from the list.

    • Justifications were carefully considered: delays caused by external dependencies (e.g., reliance on infrastructure outside the team's control) could be tolerated, but unjustified or poorly explained delays were not.

  • Rationale for Strictness

    • Catalyst funds operate on cycles of roughly four months, and approving additional funding for teams already significantly behind schedule undermines accountability.
    • By applying this filter, priority was given to teams that can deliver within an acceptable timeframe, reinforcing the principle that proposers should complete commitments before requesting new resources.

In summary, Team & Track Record was treated as a non-negotiable requirement:
if a team lacked proven experience, failed to provide verifiable references, or showed poor accountability in ongoing commitments, their proposal did not move forward in the evaluation.

2. Budget Granularity (Non-Exclusionary)

  • Proposals were expected to provide a minimum acceptable level of detail in their budgets, especially for personnel costs (hourly rates, time allocation, full-time/part-time indication).
  • Large lump-sum allocations with no explanation were considered insufficient.
  • However, since detailed budget practices are not yet widely adopted in the community, minor gaps were tolerated if the overall structure was coherent.

3. KPIs (Non-Exclusionary)

  • The presence of quantifiable KPIs with clear targets was treated as desirable but not mandatory.
  • Proposals without KPIs were not excluded outright; however, when the budget also lacked sufficient granularity, the absence of KPIs was treated as an additional weakness and used as a tie-breaker criterion for exclusion.

Combined Exclusion Rule

  • While Budget and KPIs individually were not strict exclusion filters, the simultaneous absence of both (a non-detailed budget together with missing KPIs) was considered an exclusionary condition.
  • In such cases, even if the team was qualified, the lack of both budget transparency and measurable impact indicators indicated insufficient conditions for accountability and reliable evaluation, and the proposal was excluded.
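Taken together, the three criteria form a short decision procedure. The sketch below encodes it with boolean flags (the parameter names are mine, for illustration): Team & Track Record acts as a hard stop, and the combined budget/KPI rule is applied only afterwards.

```python
def is_excluded(team_credible: bool, budget_detailed: bool, has_kpis: bool) -> bool:
    """Return True if a proposal should be excluded under the final filtering rules."""
    if not team_credible:
        return True              # Team & Track Record: non-negotiable hard stop
    if not budget_detailed and not has_kpis:
        return True              # combined exclusion: no budget detail AND no KPIs
    return False                 # otherwise the proposal advances

# A credible team with a vague budget but clear KPIs still advances:
assert is_excluded(team_credible=True, budget_detailed=False, has_kpis=True) is False
```

Ordering the checks this way makes the intent explicit: budget and KPI weaknesses are only ever weighed for teams that have already cleared the credibility bar.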

This structure ensured that the filtering process remained consistent, objective, and transparent, prioritizing strong teams while also promoting better practices in budgeting and KPI definition without being overly restrictive.


Conflict of Interest Declaration

Some proposals that I personally submitted were also processed through the same evaluation process and met the eligibility criteria.
However, to avoid any potential conflict of interest, these proposals have been clearly marked and highlighted in the evaluation records.

They were not subject to the same endorsement process applied to third-party proposals.
This measure ensures that my personal involvement as a proposer does not compromise the objectivity, neutrality, or credibility of the overall evaluation.


Fund 14 – Distributed Decision Making Proposals

Below is the consolidated list of proposals related to Distributed Decision Making that I am endorsing.
Each entry includes a direct link to the Catalyst website and platform for full details.

Below is the consolidated list of proposals related to Distributed Decision Making that are authored or co-authored by me. While they went through the same evaluation and filtering process as all other submissions, I am marking them in a separate block to clearly highlight the conflict of interest. This ensures transparency while still allowing the community to evaluate their merit independently.


Attached Spreadsheet – Transparency Note

To complement the methodology described above, I am attaching a spreadsheet that documents each stage of the evaluation process.

Here's what can be found inside:

  • Pre-Screening Stage tab → raw output of the initial AI pre-screening, showing proposals marked as Aligned, Borderline, or Not Aligned.
  • Category Pre-Screening Lists → four PDFs (Developers, Partners & Products, Ecosystem, Concepts), grouping proposals that passed the pre-screening stage.
  • Final Evaluation tab → the consolidated list of endorsed proposals related to Distributed Decision Making (DDM), after all filters (moderation, scoring, thematic alignment, and qualitative criteria) were applied.

This attachment ensures traceability and transparency of the review process: anyone can verify how proposals moved from the broad pre-screening pool to the final endorsed list.

:link: Full documentation of this work, including methodology and evaluation records, is also available on GitHub:
Agora Research Bureau – Catalyst Representative Pilot (Fund 14)


Closing Note

This consolidated review highlights the proposals most closely aligned with the theme of Distributed Decision Making in Fund 14.
By applying a structured and transparent methodology, I aimed to ensure that only initiatives with credible teams, accountable practices, and clear relevance to decision making improvements were included.

Ultimately, strengthening distributed decision making is about improving the quality, accountability, and sustainability of governance in the Cardano ecosystem.
I invite the community to check these proposals carefully and contribute to shaping a stronger, more decentralized future for Cardano.




Great and detailed analysis of your classification method. Perhaps, in the future, we will need more similar efforts from veteran contributors like you to have some standard classification and decision-making methods.

I have bookmarked it and will share it in my local community soon. Thank you!


Thank you very much! :grin: I think this type of initiative is a good way to move forward. We will have voting delegation on Catalyst very soon, and establishing guidance and methods is something every representative will need to address.


Exactly, these standardized and synchronized methods will also help proposers and reviewers.

Personally, I still prefer the solution of having at least 2 rounds to screen proposals to reduce the amount of information for voters, thereby leading to more accurate decision-making. The idea is not to point out good or bad proposals, but just to classify which is better than the other.

I also understand that the previous strategy was to increase the quantity and scalability of Catalyst, but quantity and quality are two factors that can almost never grow simultaneously and quickly. So sometimes we need quality-oriented funds.


I agree with you. I’d like to see an update where proposals would have to go through vetting steps before the voting stage; this would reduce the overhead of reviewing such a large volume. Unfortunately, this depends on updates decided solely by the Catalyst team.

Proposals:
https://reviews.projectcatalyst.io/proposal/1110
https://reviews.projectcatalyst.io/proposal/596
https://reviews.projectcatalyst.io/proposal/480

Our successes:

🌐 Greetings Cardano Immortals 💃🏻

🚀 Big Milestone from Coxygen Global
448 tertiary IT students onboarded
Across 12 countries
From 30+ tertiary institutions

🎓 Student Progress On-Chain
1200 on-chain Haskell Plutus progress tokens minted
48 students achieved 10+ tokens each
Skills and progress verifiable directly on Cardano

⏰ Global Daily Operations
Running daily across 3 time zones: East 🌅 | Central 🌍 | West 🌇
12 facilitators lead sessions
Group sizes: 3 → 60 students per facilitator
120+ average monthly cumulative attendances

🌍 Multilingual & Inclusive
Training in English 🇬🇧 and French 🇫🇷
Expanding access across diverse regions

📚 Backed by Quality & Partnerships
Based on IOG open-source professional training materials
In partnership with European Business Institutes
Students receive certification in Plutus Haskell

🔑 Powered by Coxy Wallet

Cross-platform preprod wallet
Built using Helios Coxylib + Jimba
Enables any static website to interact seamlessly with Cardano

🤝 Partnerships & Collaborations
We invite the Cardano community to:
Support our Catalyst proposals
Partner with Coxygen Global
Bring your tools, materials, services, and products
Mentor our students
Delegate to our DRep
Be our incubator for future builders

🗳️ Vote for our Proposals
Coxylib.js – Simplifying Cardano DApp Development
Cardano RWAs Hackathon – 8 industry-ready templates
Rapid Plutus Application Development (rPAD) – Pheidippides

🌍 Why This Matters
Democratizes access to blockchain education
Scales globally across institutions
Builds the next generation of Plutus developers
Proves how dApps + education + wallets accelerate adoption

✨ The Future Is On-Chain
Coxygen Global x Cardano
Training today's students → Building tomorrow's blockchain world