White Paper
Spectral: An Inference Economy for Web3
January 18, 2024
Abstract
This paper presents Spectral, an innovative platform designed to integrate artificial intelligence (AI) and machine learning (ML) with blockchain technology, specifically focusing on smart contracts in the Web3 environment. The key objective of Spectral is to facilitate a decentralized network that enables the creation, verification, and utilization of AI-generated inferences. These inferences are aimed at enhancing the functionality of smart contracts, allowing them to process dynamic and predictive information rather than relying solely on static data. We discuss the technical aspects of Spectral, including its approach to preserving privacy through zero-knowledge machine learning (zkML), its use of the InterPlanetary File System (IPFS) for decentralized data storage, and the role of the SPEC token in Spectral's governance and staking systems. The paper also highlights Spectral's significance in adding a new layer of intelligent, verifiable data to smart contracts, marking a step forward in the practical integration of blockchain technology with AI and ML.
1.     Introduction
Most machine learning (ML) models deployed in production for critical use cases today are built by a handful of large, centralized players with proprietary model training techniques. They're black boxes. Applying them in a smart contract means relying on a single source of truth and creating a single point of failure. As artificial intelligence (AI) and ML become more prevalent and impactful, the need grows for a paradigm shift toward transparent, private, secure, and verifiable sources of ML inferences (i.e., verifiable machine intelligence).
Spectral is bridging this gap between ML, AI, and blockchain by creating a Machine Intelligence Network - a network that offers high-quality inference feeds for smart contracts. Spectral incentivizes top data scientists and ML engineers to build models that output inference feeds to solve predictive and machine intelligence problems for web3 applications, thereby enabling smart contracts, companies and individuals to find and directly consume the inference feeds they need.
Spectral's inference feeds surpass the static, observable data feeds of traditional oracle networks. While existing oracle networks provide data feeds, most notably for price data, Spectral's dynamic, adaptive inference feeds can enhance the logic of smart contracts with predictive insights, allowing them to address a diverse range of use cases. This broadens the scope and the potential market for smart contracts in general, opening new horizons for decentralized applications in web3. To learn more about our vision, please refer to our vision paper.
This paper explains the underlying mechanics of the Spectral Machine Intelligence Network and illustrates the self-sustaining flywheel that enables our novel approach for integrating AI into web3.
2.     Network Design Overview
A. Overview
Spectral aims to reinvent how web3 consumes AI. Our Machine Intelligence Network is underpinned by a system of actors interacting in a self-sustaining flywheel. The sections below explore this mechanism.
B. Key Actors in the Spectral Network
  1. Creators: web3 companies that post data science/ML challenges, set performance benchmarks, and establish rewards for winning Solvers. They earn a share of the revenue generated by the consumption of inferences from their challenges.
  2. Solvers: solve data science/ML challenges posted by Creators, with the opportunity to win a bounty and receive the majority of revenues from ongoing use of their model inferences by Consumers.
  3. Validators: ensure models do what they claim to be doing, in terms of both integrity and quality. During challenges, Validators use a randomness beacon to create unique test sets for each Solver; Solvers then execute their model and upload responses to IPFS. After a given challenge, Validators check Solver responses against ground truth, verify zkML proofs, and ensure they meet the performance benchmarks set by the challenge Creator.
  4. Consumers: discover inference feeds that align with their data science/ML needs and pay fees to access and ingest them into their own applications.
C. Modeling and Consumption Workflow
Creators, Solvers, Validators and Consumers interact with one another on Spectral’s Machine Intelligence Network and exchange value through inference feeds. Here are the steps by which an inference feed is commissioned, created, validated, and consumed:
  1. A Creator first finds a relevant data science challenge to post to the Spectral Machine Intelligence Network. Once the challenge is deployed, it becomes live for Solvers to build models. All Spectral challenges are open to our global community of Solvers, who can compete against one another to build the highest-performing models for a given challenge.
  2. Solvers commit their models for the challenge - in other words, they designate a specific model version as their official submission for the challenge.
  3. Once a Solver commits a model, they will receive a challenge dataset from one of our independent Validators. This is part of the process by which Validators conduct quality assessments for all committed models. It's important to note that Spectral challenges are perpetual. As such, Solvers can build and commit models on a self-defined timeline. A Solver will receive a challenge set from a Validator only when they commit their model.
  4. Solvers then compute their unique test inferences against the challenge dataset and submit them to the Validator. These inferences are calculated in a privacy-preserving manner using zkML. Along with the inferences, the Solver also submits zkML proofs, which the Validator can use to verify that said inferences are outputs of the same model the Solver committed.
  5. Validators are then responsible for evaluating the inferences submitted by the Solver against the challenge's performance benchmarks, as well as the proofs submitted by the Solver against those inferences. Once the evaluation is carried out, the submission is considered complete, and each Solver's performance scores are compared against those of other Solvers who have also completed their submissions. Each challenge has an evolving leaderboard that registers the top ten highest-performing models at any given point in time, and only those models are eligible for consumption by Consumers.
  6. Once a Solver’s model is deployed, it is publicly displayed in association with an inference feed. A Consumer can request the inference feed through Spectral, either as part of an aggregate feed, or as a custom feed from a particular model.
  7. The Consumer's request is then routed to the top Solvers (in the case of aggregate feeds) or an individual Solver (in the case of custom feeds), and the relevant Solvers can compute and output inferences against the data presented by the Consumer.
  8. The inference is then provided to the Consumer. The entire process is seamless and can run entirely from within smart contracts (a simplified end-to-end sketch of these steps follows).
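To make the workflow above concrete, here is a minimal, self-contained Python sketch of the commit, validate, and consume loop. Everything in it is illustrative: the toy model, the hash-based commitment, and the placeholder "proof" stand in for the real artifacts (zkML proofs, IPFS storage, onchain contracts) and do not reflect Spectral's actual implementation.

```python
import hashlib
import json
from dataclasses import dataclass


@dataclass
class Challenge:
    challenge_id: str
    benchmark: float  # minimum score (e.g. accuracy) set by the Creator


def commit_model(model_params: dict) -> str:
    """Step 2: the Solver commits a model by publishing a hash of its artifact."""
    blob = json.dumps(model_params, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()


def toy_model(params: dict, x: float) -> int:
    """A trivial stand-in model: predict 1 when x exceeds a learned threshold."""
    return 1 if x > params["threshold"] else 0


def solver_respond(params: dict, challenge_set: list) -> dict:
    """Steps 3-4: compute inferences on the Validator-issued challenge set and
    attach a 'proof'. Here the proof is just a hash binding the inferences to
    the committed parameters; the real network attaches a zkML proof so the
    model itself never has to be revealed."""
    inferences = [toy_model(params, x) for x, _ in challenge_set]
    binding = json.dumps({"params": params, "inferences": inferences}, sort_keys=True)
    return {"inferences": inferences,
            "proof": hashlib.sha256(binding.encode()).hexdigest()}


def validate(commitment: str, params: dict, response: dict,
             challenge_set: list, challenge: Challenge) -> bool:
    """Step 5: the Validator checks that (a) the response is bound to the
    committed model and (b) performance meets the Creator's benchmark.
    (In this toy version the Validator sees the parameters; with zkML it
    would verify the proof instead.)"""
    if commit_model(params) != commitment:
        return False
    truth = [y for _, y in challenge_set]
    score = sum(int(p == t) for p, t in zip(response["inferences"], truth)) / len(truth)
    return score >= challenge.benchmark


if __name__ == "__main__":
    challenge = Challenge("defi-credit-risk", benchmark=0.75)
    params = {"threshold": 0.5}                                # the Solver's "model"
    commitment = commit_model(params)                          # step 2
    challenge_set = [(0.2, 0), (0.7, 1), (0.9, 1), (0.4, 0)]   # issued by a Validator
    response = solver_respond(params, challenge_set)           # steps 3-4
    print("eligible for consumption:",
          validate(commitment, params, response, challenge_set, challenge))
```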
D. Technical Design Principles
To ensure robust adoption of this new ML consumption paradigm, Spectral is built on design principles that promote decentralization, free market dynamism, and privacy protection:
  1. Privacy-Preserving Machine Learning: Spectral uses zkML to preserve privacy during training, evaluation, and consumption of machine learning models. zkML ensures that Solvers' intellectual property is protected and contributes to a secure and trustless network.
  2. Technical Abstraction: While adopting intricate cryptographic and mathematical concepts in zkML, Spectral aims to shield users from technical complexities for a frictionless experience. The focus is on maintaining the integrity of machine intelligence during inference time without compromising Solver experience.
  3. Validator-based Quality Control: To guarantee the quality of machine intelligence, Validators play a crucial role in the network. They utilize a randomness beacon to verify machine learning models in a tamper-proof manner (a minimal sketch of beacon-derived test sets follows this list). Validators also evaluate model performance using a diversified set of industry-accepted validation metrics, ensuring that inferences can meet the standards of Consumers in a variety of verticals and scenarios. For transparency and accountability, Validators will openly disclose all relevant intermediary steps.
  4. Customizable Solutions: Acknowledging the context-specific nature of inferences, Spectral supports both scalable and custom machine intelligence models. Spectral can serve inferences for general use by multiple Consumers, or tailor them to the specific requirements of specialized use cases.
  5. Network Effects: The Machine Intelligence Network operates as a built-in, self-scaling flywheel, with incentives that reward Solvers for producing more and better models, and web3 companies for identifying and posting the most relevant predictive and ML needs as challenges. These effects are explored further in the section below.
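As a rough illustration of the randomness-beacon idea in principle 3 (and of the unique per-Solver test sets described in the Key Actors section), the sketch below seeds a PRNG with a hash of the beacon value and the Solver's address, so the draw is deterministic, reproducible by anyone, and different for each Solver. The beacon value, dataset, and sampling rule are all hypothetical.

```python
import hashlib
import random


def derive_test_set(beacon_value: str, solver_address: str,
                    dataset: list, sample_size: int) -> list:
    """Seed a PRNG with hash(beacon || solver) so the sample is tamper-evident
    and unique to each Solver, yet verifiable by any third party."""
    seed = hashlib.sha256(f"{beacon_value}:{solver_address}".encode()).hexdigest()
    rng = random.Random(int(seed, 16))
    return rng.sample(dataset, sample_size)


if __name__ == "__main__":
    dataset = list(range(1000))       # stand-in for wallet/contract records
    beacon = "0xabc123"               # e.g. a public beacon round or block hash
    print(derive_test_set(beacon, "0xSolverA", dataset, 5))
    print(derive_test_set(beacon, "0xSolverB", dataset, 5))  # a different sample
```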
E. Incentives and Rules of the Network
Each actor in the Spectral network is incentivized to be a part of a self-fulfilling flywheel:
  1. Creators are incentivized to scout and post new challenges because they unlock an additional source of revenue when their challenge sees demand from multiple Consumers who find it relevant to their business problems.
  2. As Solvers accumulate more rewards from high-performing models, they are incentivized to continue building better models across various challenges. Throughout the process, their intellectual property is preserved which makes perpetual incentivization possible. They are also incentivized to compete with existing top modelers in order to partake in fee sharing.
  3. Consumers are incentivized to request inferences on Spectral because they can harness the power of the decentralized community to obtain permissionless models that produce readily ingestible high-quality inferences, all at a lower cost than developing models in-house.
  4. Validators are incentivized to validate the integrity of inferences because it unlocks a stream of steady revenue while giving validators the opportunity to secure their earnings by staking.
These incentives enable a departure from the status quo of ML where a handful of large, centralized players produce critical inferences without transparent validation. Instead, Spectral ushers in a new era: one where any agent (consumers, smart contracts, LLMs, etc.) on the Spectral network can automatically verify, rather than merely trust, inferences and outputs from other ML models and use them readily in their applications.
Diagram: Flywheel effects within the Spectral Network
Additionally, to ensure a competitive Nash equilibrium, Spectral has instituted a set of rules of engagement in our network:
  1. Promotion of competitive flexibility. A Solver can recommit their model an unlimited number of times. If the Solver already has a model live in consumption, the existing model remains active while the Solver works on a new one, ensuring no disruption in reward distribution.
  2. Lack of verification is penalized. A Solver is required to provide proof that a particular inference indeed originated from their valid, benchmarked model. zkML proofs are requested automatically by the network protocol. A Solver that fails to provide these proofs three times will see their model removed from the list of top Solvers. The Solver must then submit retroactive proofs for their model to become eligible for consumption again.
  3. Models are tested for real-world relevance. A Solver is subjected to a forward testing window, where their model's predictions are compared with the real-life outcome for any particular wallet or contract address. This design compels Solvers to be accountable for building the most functionally accurate models.
  4. Evaluation windows run in parallel. Each Solver runs their own clock relative to other Solvers for when they submit their model and its proofs, when the model goes through the forward evaluation window, and when the model goes live for consumption after meeting benchmarks. This mechanism ensures perpetual challenges, where any Solver can join at any time and still emerge victorious.
  5. A Solver is incentivized to be early in their submissions. Any challenge on the Spectral Network goes live after more than one model has been committed and evaluated for performance. Once the challenge is live, bounty and fee rewards are distributed on a weekly basis. Because a fixed amount of bounty is distributed every week for a set number of weeks, the fewer the committed models, the more rewards each Solver receives per week (see the sketch after this list). Once a challenge gets consistent consumption, more Solvers will start committing models, leading to more competition and smaller rewards per Solver.
  6. Market forces can determine traffic to a model. A Consumer can choose to consume a metamodel inference (i.e., an aggregate inference of top-performing models) or individual inferences (sourced directly from an individual model). If a Solver's reputation is excellent, a Consumer can choose to continue consuming from that particular model. This incentivizes the Solver to produce the best models possible.
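A worked example of rule 5, under the simplifying assumption that the fixed weekly bounty is split evenly among all committed models; the bounty size and the even split are placeholders, since actual distribution rules are set per challenge.

```python
WEEKLY_BOUNTY = 1000.0   # SPEC distributed per week (hypothetical amount)


def weekly_reward_per_solver(num_committed_models: int) -> float:
    """Assumed even split of the fixed weekly bounty across committed models."""
    return WEEKLY_BOUNTY / num_committed_models


if __name__ == "__main__":
    for n in (2, 5, 20):
        print(f"{n:>2} committed models -> "
              f"{weekly_reward_per_solver(n):8.2f} SPEC per Solver per week")
```

The earlier a Solver commits, the fewer models share each weekly distribution, which is exactly the early-mover incentive described above.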
3.     Technical Architecture and Workflows
Diagram: Technical Architecture of the Spectral Network
Spectral Network's core architecture is a modular, service-oriented setup comprising components that provide scalability and flexibility:
  1. Embedded wallets: We’ve partnered with Privy to onboard users without the hassles of wallet management. Privy generates embedded custodial wallets on demand and helps onboard users seamlessly using just their email address.
  2. L2: The Spectral Finance machine learning protocol is deployed on the Arbitrum network. Arbitrum is an EVM-compatible Ethereum scaling solution that enables low transaction costs and high volume scalability. Spectral leverages Arbitrum to:
    • Provide storage for inference requests, model challenges, and fraud proofs for Solvers, Challenges, and Validators
    • Allow registration and tracking of all actors (Solvers, Consumers, Validators, and Creators)
    • Provide contracts for interacting with the Spectral Network (for committing models, consuming inferences, etc.)
    Each actor interacting with the Spectral Machine learning platform will broadcast transactions to the Arbitrum network for a small gas fee. The Arbitrum blockchain inherits Ethereum's trustless and decentralized features, as all transactions on Arbitrum are settled on Ethereum via the Arbitrum sequencer. 
  3. Account Abstraction: Spectral has designed a novel ERC-4337-compliant multi-signature wallet contract, enabling all Spectral machine learning platform actors to interact with the platform gaslessly. Normally, conducting a transaction on Arbitrum requires the user to pay a gas fee in ETH. For new users, particularly non-web3-native users, acquiring these tokens can be confusing and daunting. The ERC-4337 standard enables transactions to be processed without the end user needing to hold or manage ETH on Arbitrum. Users do not broadcast their transactions straight to the Arbitrum blockchain when interacting with Spectral. Instead, they are sent to a separate mempool, where transaction intents are received by a "Paymaster", who pays the gas fees of transactions on behalf of the initial sender, for a fee in an alternative currency. Spectral has integrated with Alchemy and is acting as a custom Paymaster for the Spectral machine learning platform. This Paymaster uses a separate Spectral-only mempool that can facilitate gasless transactions for all users in the machine learning platform, including Solvers and Validators. The Spectral mempool will only fulfill transactions that are related to the Spectral machine learning platform, and will not fulfill transactions that do not interact with the Spectral smart contracts (a hedged sketch of this flow appears after the workflow diagrams below).
  4. RPC node: Alchemy is used as the node provider to interact with the blockchain.
  5. Pulse: Spectral's backend application that tracks all blockchain events relevant to the Spectral Platform (including the requests and responses to and from machine learning models). Pulse also provides APIs for Solvers and Validators to view transactions on the blockchain.
  6. Modeler CLI: The Modeler CLI is responsible for all Solver interactions with the Spectral Platform during the model preparation and submission phase. It streamlines the experience of participating in a challenge by generating machine learning model commitments and verifiable computation proofs, and by submitting responses to the blockchain.
  7. Nova: This is the service provided by Spectral, serving off-chain machine learning inferences to the blockchain during the consumption phase of the challenge.
  8. Verifiable computation: Spectral uses multiple verifiable computation approaches that allow for proof generation associated with an inference response, so that a Consumer knows that only the committed model generated the inference, and performance benchmarks (as tested) were met at the time of generating the inference. To that end, Spectral utilizes the following techniques:
    • Zero-Knowledge Machine Learning (zkML): Zero-knowledge proofs are a way of mathematically verifying a specific claim without revealing the underlying information. Essentially, they work by proving correct evaluation of a computation (in our case, the equivalent circuit of a given ML model) without exposing its inputs. zkML empowers us to verifiably prove that a given inference came from the specific machine learning model a modeler claims it did (a minimal commitment-and-verification sketch follows the component list below).
    • Optimistic Machine Learning (opML): opML is a more efficient way (relative to zkML) to optimistically challenge and verify the ML model’s inferences onchain. A proof is generated only when a specific inference is challenged which then leads to a verification game (similar to optimistic rollups) to verify the integrity of the disputed inference.
    Diagram: zkML workflow using the ezkl library
  9. IPFS: Spectral uses the IPFS protocol for the storage layer to natively store the following:
    • Challenge definitions
    • Verifiable computation commitments (storage of commitment events)
    • Proof storage (proofs against inferences generated by a model)
    • Inference submissions (storage of submission events during the model submission stage)
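As a rough sketch of how items 8 and 9 fit together, the snippet below content-addresses a model commitment and a proof record the way IPFS would (an in-memory dictionary and SHA-256 digests stand in for IPFS and its CIDs), then shows a Consumer following the link from a proof back to the commitment. The record schema and field names are hypothetical, not Spectral's actual format.

```python
import hashlib
import json

STORE: dict[str, bytes] = {}   # stand-in for IPFS: content hash -> content


def put(obj: dict) -> str:
    """Store a JSON object and return its content address (a stand-in CID)."""
    blob = json.dumps(obj, sort_keys=True).encode()
    cid = hashlib.sha256(blob).hexdigest()
    STORE[cid] = blob
    return cid


def get(cid: str) -> dict:
    """Fetch an object by its content address."""
    return json.loads(STORE[cid])


if __name__ == "__main__":
    # Solver side: publish a model commitment, then a proof that references it.
    commitment_cid = put({"type": "model_commitment", "model_hash": "0xdeadbeef"})
    proof_cid = put({"type": "zkml_proof", "commitment": commitment_cid,
                     "inference": {"address": "0xBorrower", "score": 612}})

    # Consumer side: fetch the proof, follow the link back to the commitment,
    # and check that the inference is bound to the committed model.
    proof = get(proof_cid)
    assert get(proof["commitment"])["model_hash"] == "0xdeadbeef"
    print("inference", proof["inference"],
          "is bound to commitment", proof["commitment"][:12], "...")
```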
    To further illustrate the technical underpinnings of our network, please refer to the following workflows, which show the interactions between these components from the Consumer, Solver, and Validator perspectives.
    Diagram: Solvers Workflow
    Diagram: Validator Workflow
    Diagram: Consumer Workflow
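To make the account-abstraction flow from item 3 above more concrete, here is a hedged Python sketch. The UserOperation fields follow the ERC-4337 standard (v0.6 layout), but the contract addresses, the call-data decoding, and the sponsorship policy are hypothetical illustrations of a Spectral-only Paymaster, not the actual Alchemy integration.

```python
from dataclasses import dataclass


@dataclass
class UserOperation:            # ERC-4337 user operation (v0.6 field layout)
    sender: str                 # smart account sending the operation
    nonce: int
    init_code: bytes            # account deployment code, empty if already deployed
    call_data: bytes            # encoded call the account will execute
    call_gas_limit: int
    verification_gas_limit: int
    pre_verification_gas: int
    max_fee_per_gas: int
    max_priority_fee_per_gas: int
    paymaster_and_data: bytes   # paymaster address + data; the paymaster pays gas
    signature: bytes


# Hypothetical addresses of Spectral platform contracts on Arbitrum.
SPECTRAL_CONTRACTS = {"0xSpectralChallenges", "0xSpectralInference"}


def target_of(call_data: bytes) -> str:
    """Toy decoder: assume the target contract address prefixes the call data."""
    return call_data.split(b"|", 1)[0].decode()


def paymaster_sponsors(op: UserOperation) -> bool:
    """A Spectral-only policy: sponsor gas only for operations that interact
    with Spectral contracts, mirroring the mempool restriction described above."""
    return target_of(op.call_data) in SPECTRAL_CONTRACTS


if __name__ == "__main__":
    op = UserOperation(
        sender="0xSolverAccount", nonce=7, init_code=b"",
        call_data=b"0xSpectralChallenges|commitModel(...)",
        call_gas_limit=200_000, verification_gas_limit=150_000,
        pre_verification_gas=50_000, max_fee_per_gas=10**8,
        max_priority_fee_per_gas=10**7,
        paymaster_and_data=b"0xSpectralPaymaster", signature=b"")
    print("sponsored:", paymaster_sponsors(op))
```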
4.     Token Economy
A. Overview of the SPEC Token
The Spectral Network incorporates the SPEC token as an ERC20 standard, aligning it with Ethereum's ERC20 protocol for seamless compatibility across wallets, exchanges, and decentralized applications. This token assumes a pivotal role in the platform's onchain governance, providing holders with voting power and decision-making authority over crucial operational aspects.
In essence, SPEC operates as a governance token, enabling community stakeholders, including Solvers, Validators, challenge Creators, and users, to actively shape the platform's trajectory through decentralized governance mechanisms. This inclusive approach ensures collective decision-making, mitigating the influence of centralized entities.
The onchain governance facilitated by SPEC allows holders to propose and vote on platform upgrades, modifications, and parameter adjustments. This democratic process fosters community consensus, enhancing transparency and inclusivity in decision-making.
Beyond its governance function, SPEC is integral to the staking mechanisms within the Network. Validators, essential for validating Solvers' submissions and maintaining challenge integrity, are mandated to stake SPEC as collateral. This requirement incentivizes validators to act with integrity, as any misbehavior or malicious actions may result in the forfeiture of their staked tokens.
Additionally, SPEC serves as a medium of exchange and value transfer within the platform. Users accessing machine learning models are required to pay fees, which can be in the form of Ethereum (ETH) or stablecoins, depending on challenge rules. A portion of these fees is allocated as rewards to Solvers, fostering active participation and encouraging high-quality contributions to the Spectral ecosystem. This multifaceted role of SPEC underscores its significance in both the governance and operational dynamics of the Spectral network.
B. Governance
The SPEC token serves as a powerful mechanism for actors in the system to actively participate in the governance of the Decentralized Autonomous Organization (DAO).
  1. Voting on Network Improvement Proposals: SPEC token holders can actively participate in the governance process by submitting and voting on proposals related to network upgrades, feature implementations, fee structures, and standardized challenge rules. When submitting proposals and voting on them, the DAO shall follow these guidelines:
    • Voting power is proportional to the amount of SPEC held by a user.
    • There is a designated voting period for each proposal.
    • Token holders express support or opposition during the voting period.
    • Votes can be cast directly or delegated to other addresses.
    • Delegating voting power promotes broader participation.
    • A proposal must meet a minimum quorum requirement to be valid. 
    • The approval requirement is typically 40% of total votes for a successful proposal.
    • If a proposal meets both the quorum and approval requirements, it is considered successful. The proposed changes are then implemented in the protocol according to the terms outlined in the proposal (see the sketch after this list).
  2. Influencing Governance Parameters: Token holders have the ability to propose and vote on adjustments to governance parameters, including voting rules, quorum thresholds, and voting periods, contributing to the agility and responsiveness of the platform's governance model.
  3. Staking for Validation: Validators stake SPEC tokens as collateral to play a critical role in maintaining the integrity of machine learning challenges on Spectral. They are responsible for validating submissions made by participating Solvers and are rewarded for their honest and accurate assessments.
  4. Participating in Smart Contract Upgrades: Through DAO governance, SPEC token holders can vote on proposed changes to Spectral's smart contracts. This allows for the platform's flexibility and adaptability to emerging technologies and security enhancements.
  5. Allocating Funds for Community Initiatives: The DAO may allocate a portion of the platform's funds to support community-driven initiatives, research and development, marketing efforts, or partnerships. This mechanism empowers the community to drive initiatives that benefit the platform as a whole.
  6. Shaping the Future of Spectral: Overall, holding SPEC tokens grants individuals the opportunity to actively shape the future of the Spectral Network by participating in various governance activities, ensuring a transparent and inclusive decision-making process.
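A minimal sketch of the acceptance rule described in the voting guidelines above, interpreting "total votes" as votes cast and using a hypothetical quorum of 10% of the voting supply; the real parameters are themselves subject to governance.

```python
def proposal_passes(votes_for: float, votes_against: float,
                    total_voting_supply: float,
                    quorum: float = 0.10,      # hypothetical: 10% of supply must vote
                    approval: float = 0.40) -> bool:
    """A proposal succeeds only if turnout meets the quorum and support
    meets the approval requirement (40% of votes cast)."""
    votes_cast = votes_for + votes_against
    meets_quorum = votes_cast >= quorum * total_voting_supply
    meets_approval = votes_cast > 0 and (votes_for / votes_cast) >= approval
    return meets_quorum and meets_approval


if __name__ == "__main__":
    print(proposal_passes(votes_for=4_200_000, votes_against=3_000_000,
                          total_voting_supply=50_000_000))   # True
    print(proposal_passes(votes_for=400_000, votes_against=100_000,
                          total_voting_supply=50_000_000))   # False: quorum not met
```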
C. Staking
Actors within the Spectral network can stake SPEC in different ways, depending on their use case and utility for the token:
  • For Solvers: In the Spectral Network, Solvers play a crucial role by participating in machine learning challenges. To enter a challenge, Solvers are required to stake SPEC tokens, showcasing their commitment to delivering high-quality machine learning models. This staking requirement encourages Solvers to have "skin in the game" and fosters a sense of dedication to producing top-notch models. Solvers also have the option to pool funds through crowdfunding mechanisms, enabling collaborative efforts to meet the staking requirements collectively.
  • For Validators: Validators in the Spectral Network are responsible for maintaining the integrity of machine learning challenges. Validators stake SPEC tokens as collateral, demonstrating their commitment to the network's security. Their role involves validating Solvers' submissions and ensuring adherence to challenge guidelines. Validators face potential slashing if they fail to validate submissions within specified time windows or engage in malicious behavior (see the staking sketch after this list). This staking mechanism incentivizes Validators to act diligently and responsibly, contributing to the overall credibility of the network.
  • For Consumers: Consumers accessing machine learning models within the Spectral Network contribute to the ecosystem by paying fees for model usage. These fees, which can be in the form of SPEC, Ethereum (ETH), or stablecoins, are distributed to Solvers as rewards. Consumers also have the option to stake SPEC tokens, allowing them to receive discounts on network fees or even waive fees entirely. The staking of SPEC tokens by Consumers aligns their interests with the success of the network, creating a mutually beneficial relationship between Consumers and other participants.
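The sketch below illustrates, with hypothetical numbers, how staked SPEC could back the behaviors described in this list: a slashable Validator stake and stake-based fee discounts for Consumers. The slash fraction, discount tiers, and class design are illustrative only, not Spectral's actual parameters.

```python
class StakeLedger:
    """A toy ledger of SPEC stakes per actor."""

    def __init__(self):
        self.stakes: dict[str, float] = {}

    def stake(self, actor: str, amount: float) -> None:
        self.stakes[actor] = self.stakes.get(actor, 0.0) + amount

    def slash(self, validator: str, fraction: float = 0.25) -> float:
        """Forfeit part of a Validator's stake for missed windows or misbehavior."""
        penalty = self.stakes.get(validator, 0.0) * fraction
        self.stakes[validator] = self.stakes.get(validator, 0.0) - penalty
        return penalty

    def consumer_fee(self, consumer: str, base_fee: float) -> float:
        """Hypothetical tiering: larger SPEC stakes earn larger fee discounts."""
        staked = self.stakes.get(consumer, 0.0)
        discount = 1.0 if staked >= 10_000 else 0.5 if staked >= 1_000 else 0.0
        return base_fee * (1.0 - discount)


if __name__ == "__main__":
    ledger = StakeLedger()
    ledger.stake("validator-1", 5_000)
    ledger.stake("consumer-1", 1_500)
    print("slashed:", ledger.slash("validator-1"))                    # 1250.0 forfeited
    print("discounted fee:", ledger.consumer_fee("consumer-1", 20.0)) # 10.0
```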
Incentives for staking SPEC are also distinct for every actor as follows:
  • Incentive for Solvers: For Solvers in the Spectral Network, staking SPEC tokens serves as a commitment to challenges. Rewards are tied to the quality and performance of their machine learning models, and specific incentives are distributed during reward epochs. This approach ensures that top-performing Solvers and every benevolent Solver, regardless of ranking, are rewarded with SPEC tokens. The system encourages ongoing engagement and improvement, fostering a collaborative and inclusive environment within the network.
  • Incentive for Validators: Validators in the Spectral Network receive incentives for their active participation and commitment to the platform's security. They stake SPEC tokens as collateral and receive a portion of the revenue generated from users requesting inferences from Solvers. Validators also earn rewards for identifying and exposing malicious behavior, creating a self-regulating environment. In cases where a validator successfully challenges and uncovers misconduct, they are rewarded, reinforcing the network's security and trustworthiness.
  • Incentive for Consumers: Consumers in the Spectral Network can gain incentives by staking SPEC tokens, allowing them to receive discounts on network fees. This aligns consumer interests with the success of the network, creating a symbiotic relationship. The varying incentives for Solvers, validators, and consumers contribute to the overall health and sustainability of the Spectral ecosystem, fostering a balanced and mutually beneficial environment among its diverse participants.
5.     Case Study: Decentralized Credit Score
Spectral offers a new way of consuming machine learning models - a way that’s permissionless, decentralized and yet assuring of performance and quality. Together with web3 paradigms, Spectral is tapping an opportunity to fundamentally disrupt some of the existing constructs and business models known to be opaque, centralized and biased. One such area ripe for disruption is Credit Bureaus and bureau controlled Credit Scores.
A. Problem Statement
Credit, in the form of written obligations, has existed for millennia. The current credit risk assessment methodologies employed in traditional finance have been in vogue since the 1980s and 1990s. Creditworthiness, in traditional finance, is the assessment of an individual or a corporate entity’s predicted probability of default, represented through a credit score and credit rating, respectively. Default refers to a borrower’s failure to repay their debt obligations within a stipulated time.
Web3 has created an entirely new category of finance and financing - Decentralized Finance (DeFi). However, the concept of credit risk assessment has not yet been widely adopted in DeFi. Instead, DeFi lending protocols rely on overcollateralization to minimize and mitigate their and the lenders' exposure to credit risk. While DeFi protocols like Aave and Compound operate sustainably on overcollateralization, the absence of a credit risk infrastructure results in capital inefficiency and creates a playing field advantageous to whales.
B. Solution Overview
Spectral is reimagining the status quo by creating a decentralized credit risk assessment infrastructure, and our first challenge (live now, find more details here) is geared towards this goal. The purpose of this challenge is to create a credit risk assessment framework within the DeFi ecosystem, aiming to seamlessly integrate credit risk evaluation into the world of DeFi - thereby improving the overall capital efficiency and allowing for risk-based pricing and customized loan terms to the borrowers.
C. Defining risk as the probability of liquidation
Despite the traditional concepts of lending and borrowing being applicable in DeFi as well, the concept of default does not exist in DeFi, where borrowing is inherently more fluid than in traditional finance. Multiple loans may be made with no fixed repayment schedule, all of which can be repaid at once or through multiple repayments of varying amounts over an extended period. Overcollateralization results in the DeFi lending protocols (like Compound and Aave) requiring a minimum collateral level to be maintained by borrowers, known as the health factor (calculated as the ratio of the value of collateral to loan amounts). When a borrower’s health factor drops below a specified threshold, their open debt positions can be liquidated by a liquidator (in an event called liquidation), with the sale proceeds being used to repay the loan obligation to the lending protocol. Given the volatile nature of various cryptocurrencies, borrowers need to ensure the value of their provided collateral does not drop or the outstanding loan value does not increase by such an extent to put them at the risk of liquidation. Creditworthy DeFi borrowers aim to maintain a reasonable headroom in their health factors to avoid liquidations and loss of capital. Accordingly, creditworthiness in DeFi is an assessment of the predicted probability of liquidation—the higher the probability of liquidation (i.e., high-risk borrower), the lower the credit score, and vice versa.
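As a simple numeric illustration of the health-factor logic above, the sketch below uses hypothetical collateral and debt values and a liquidation threshold of 1.0; real protocols additionally weight collateral by asset-specific parameters, which is omitted here.

```python
def health_factor(collateral_value_usd: float, debt_value_usd: float) -> float:
    """Ratio of the value of collateral to the value of outstanding loans."""
    return float("inf") if debt_value_usd == 0 else collateral_value_usd / debt_value_usd


def at_risk_of_liquidation(hf: float, liquidation_threshold: float = 1.0) -> bool:
    """A borrower becomes liquidatable once the health factor drops below the threshold."""
    return hf < liquidation_threshold


if __name__ == "__main__":
    hf = health_factor(collateral_value_usd=15_000, debt_value_usd=10_000)
    print(f"health factor = {hf:.2f}")                    # 1.50: comfortable headroom
    # A 40% drop in collateral value pushes the borrower below the threshold.
    hf_after_drop = health_factor(15_000 * 0.6, 10_000)
    print(f"after a 40% drop = {hf_after_drop:.2f}",
          "-> liquidatable" if at_risk_of_liquidation(hf_after_drop) else "-> safe")
```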
D. Challenge Rules
This challenge asks Solvers to predict liquidation (as a binary classification) of an active borrower on Aave v2 Ethereum and Compound v2 Ethereum during the live challenge phase. Liquidation here includes both:
  1. Actual liquidation, where a borrower’s health factor drops below the liquidation threshold triggering a liquidation event; and
  2. Technical liquidation, where a borrower's health factor drops below a specific threshold.
The latter ensures that risky borrowers are captured even when they are not actually liquidated despite their health factor dropping below the liquidation threshold, or when their onchain borrowing behavior is risky enough that even a slight market downturn would trigger a liquidation event (see the labeling sketch below).
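A minimal sketch of the binary labeling rule above, where the technical-liquidation threshold of 1.10 is a placeholder for the value defined in the challenge specification.

```python
def liquidation_label(was_liquidated: bool, min_health_factor: float,
                      technical_threshold: float = 1.10) -> int:
    """Label 1 if the borrower was actually liquidated during the window, or if
    their health factor dipped below the technical-liquidation threshold."""
    return int(was_liquidated or min_health_factor < technical_threshold)


if __name__ == "__main__":
    print(liquidation_label(False, 1.35))  # 0: stayed above both thresholds
    print(liquidation_label(False, 1.05))  # 1: technical liquidation
    print(liquidation_label(True, 1.20))   # 1: actual liquidation event
```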
E. Results and Impact
The above case study is an actual challenge live on Spectral (as of January 2024). For this challenge, all submitted models will be evaluated against the weighted average of the following seven model validation metrics:
  1. Area Under the Receiver Operating Characteristic Curve (AUC/AUROC)
  2. Area Under the Precision-Recall Curve (PR-AUC)
  3. Recall Score
  4. F1 Score
  5. Brier Score (since a lower Brier Score is better, Spectral uses 1 - Brier Score to score models)
  6. Kolmogorov-Smirnov Statistic (KS Statistic)
  7. Predicted Probability Densities (difference between the median predicted probability of the two labels)
These metrics will be calculated for the predictions (probabilities + labels) returned by the Solver on the testing dataset that will be made available by the Validator. The respective weights and knock-out thresholds for each of the above metrics are as follows:
Additional Details:
  1. The overall score (a number between 0 and 100, inclusive) is the weighted average of all seven metrics based on their respective weights (akin to Excel's SUMPRODUCT function); a minimal scoring sketch with placeholder weights follows these details.
  2. The Knock-Out Thresholds indicate the minimum required metric value for a given model, i.e., any model that results in any of the seven metrics being less than the knock-out threshold will be automatically discarded, irrespective of the overall score.
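The routine below sketches this scoring logic with placeholder weights, knock-out thresholds, and metric values; the actual weights and thresholds are defined in the challenge specification.

```python
METRICS = ["auc", "pr_auc", "recall", "f1", "one_minus_brier", "ks", "prob_density_diff"]
WEIGHTS = {m: 1 / len(METRICS) for m in METRICS}   # hypothetical equal weights
KNOCKOUTS = {m: 0.5 for m in METRICS}              # hypothetical thresholds


def overall_score(metric_values: dict) -> float | None:
    """Return the 0-100 weighted score (a SUMPRODUCT of weights and metrics),
    or None if any metric falls below its knock-out threshold."""
    if any(metric_values[m] < KNOCKOUTS[m] for m in METRICS):
        return None   # the model is discarded regardless of the weighted score
    return 100 * sum(WEIGHTS[m] * metric_values[m] for m in METRICS)


if __name__ == "__main__":
    submission = {"auc": 0.82, "pr_auc": 0.61, "recall": 0.70, "f1": 0.66,
                  "one_minus_brier": 0.88, "ks": 0.55, "prob_density_diff": 0.58}
    print(overall_score(submission))   # roughly 68.6 with these placeholder values
```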
Once a model qualifies in the top 10, the Solver can choose to enable their model for consumption on our platform. Any consumer (web3 companies, credit scoring agencies, etc.) can obtain a credit score for any borrower (public key address) and plug it directly into their smart contracts, thereby increasing their confidence in the risk assessment in DeFi funding operations.

The evaluation metrics illustrated above are specific to this challenge; metrics can be designated as per the requirements of the Creator that published the challenge on Spectral.
6.     Future Work
This paper has explored the design behind the Spectral network by highlighting the actors, their incentives, the technical architecture, workflows, and tokenomics. With these mechanics, Spectral launched its first challenge in November 2023, and it was met with great traction: more than 250 Solvers have signed up on our platform to build models for the challenge, about 10% of these modelers have already committed their models for evaluation, and consumption of inference feeds is slated to begin in February 2024.
Over the next few quarters, Spectral will build upon this traction and launch additional challenges geared to solve industry-relevant problems across Web3. The following are examples of challenges Spectral will launch:
  • Solidity Code Generator: This challenge tasks Solvers with building a generative AI agent capable of generating functional Solidity code; Consumers can use this feed to create and deploy smart contracts on demand. The LLM powering the generator would be the best-performing model among a pool of models committed by Solvers.
  • UniswapX Collaborative Filler: Given the expanding space of pools and the increase in multichain activity, the existing framework for token swaps and bridging is not sustainable. Improving this requires a network of decentralized solvers competing and collaborating to satisfy user intents. Collaborative filling creates ensemble routes between multiple solvers who may have a better solution for fulfilling specific legs of the trade. Solvers compete to build a globally optimized pathfinder that routes across all individual UniswapX filler routes. Successful participants will not only help create a better experience for swappers, liquidity providers, and fillers on Uniswap but will deliver the first truly collaborative approach to decentralized solving networks. This framework will generalize to other intent-based solutions.
  • NFT Recommendation Engine: This challenge focuses on creating an accurate Non-Fungible Token (NFT) recommendation engine. NFTs have become a significant part of the digital marketplace, and there is a need for effective personalization tools to navigate the diverse range of assets. The challenge aims to leverage recommendation engines to assist users in exploring and distinguishing between various NFT categories.
  • Price Prediction: This challenge focuses on predicting the log returns of ETH (Ethereum) in USDC (USD Coin) over a period of up to thirty days in the Uniswap V3 USDC-ETH 0.05% pool. Consumers can leverage this inference feed to forecast short-term returns in the dynamic trading landscape of decentralized exchanges (DEX), specifically Uniswap.
  • NBA Sports Prediction: This challenge focuses on predicting player-specific points, rebounds, and assists, as well as team and opponent team total points in the NBA. Solvers compete to build predictive models using historical player statistics, performance, and other data to generate the best daily predictions.
Building on top of our existing community strength, our future roadmap is geared towards The InferChain: Spectral's custom L2 built for web3 to integrate with AI in a trustless, verified way. To that end, Spectral's 2024 roadmap unfolds in four phases: Inception (Q1) focuses on launching the Machine Intelligence Network, including decentralized architecture and SPEC governance. Scaling (Q2) aims to grow network usage with advanced features, while Diversification (Q3) enriches the platform with versatile inference feeds. The final phase, The InferChain (Q4), aims to actualize the Inference Economy by optimizing onchain feed consumption, with an early release in 2024 and a Mainnet launch in 2025.
7.     Conclusion
In conclusion, Spectral stands at the forefront of a transformative era, pioneering an onchain inference economy that seamlessly integrates machine intelligence with blockchain technology. By facilitating a decentralized, transparent, and efficient ecosystem for AI inferences, Spectral is not just enhancing the capabilities of smart contracts but is also introducing an entirely new layer of dynamic, predictive information. This innovation promises to revolutionize the way smart contracts operate, moving beyond static data inputs to a more intelligent, adaptive, and responsive system. The integration of Spectral's inference feeds into smart contracts opens up unprecedented possibilities for automation, decision-making, and interaction within the Web3 space. As a harbinger of this new paradigm, Spectral is poised to unlock a myriad of opportunities, driving the evolution of smart contracts into truly intelligent agents capable of nuanced understanding and action. This leap forward signifies a monumental shift in the blockchain ecosystem, heralding a future where smart contracts are not only self-executing but also self-learning, fueled by the robust, decentralized intelligence network that Spectral enables.