Verifiable Computing is a family of cryptographic protocols that enable two actors, a Prover and a Verifier, to establish that a requested computation was done correctly: the Prover produces a mathematical proof of correctness, and the Verifier checks it.
One example of such protocols is Zero-Knowledge Proof (ZKP) systems, which have jumped from theory to practice in the last decade. In a ZKP, the Prover computes a proof of a given statement, which the Verifier then checks without learning anything beyond the statement's truth. This property is known as “Zero Knowledge”: the proof reveals nothing but its own validity. Currently, ZKP technologies are routinely applied in blockchain and distributed-ledger applications, mainly to protect the privacy of transactions or to verify the correctness of off-chain operations.
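To make the Prover/Verifier flow concrete, here is a minimal sketch in Python of one of the simplest zero-knowledge proofs: a Schnorr proof of knowledge of a discrete logarithm, made non-interactive with the Fiat-Shamir heuristic. The group parameters are toy values chosen only for readability, and the `prove`/`verify` helpers are illustrative names, not any particular library's API.

```python
import hashlib
import secrets

# Toy group parameters (far too small for real security): p = 2q + 1,
# and g generates the order-q subgroup of Z_p^*.
p, q, g = 2039, 1019, 4

def hash_to_challenge(*vals):
    """Fiat-Shamir: derive the challenge from the public transcript."""
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x, y):
    """Prover: show knowledge of x with y = g^x mod p, without revealing x."""
    r = secrets.randbelow(q)           # random nonce
    t = pow(g, r, p)                   # commitment
    c = hash_to_challenge(g, y, t)     # challenge
    s = (r + c * x) % q                # response
    return t, s

def verify(y, proof):
    """Verifier: accept iff g^s == t * y^c (mod p)."""
    t, s = proof
    c = hash_to_challenge(g, y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x = secrets.randbelow(q)               # the Prover's secret
y = pow(g, x, p)                       # public statement: "I know log_g(y)"
assert verify(y, prove(x, y))
```

The same prove-then-verify shape, with far heavier machinery inside, is what the general-purpose ZKP systems discussed below expose.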
All general-purpose ZKP systems based on SNARKs require that the computation and the associated input data be represented as an arithmetic circuit. To be processed, the circuit is translated into an intermediate format (such as R1CS, AIR, or a PLONKish arithmetization) that is fed into the prover to produce the corresponding correctness proof.

ZK proof systems vary greatly in their performance and in the setup and security assumptions they require. From a very high-level point of view, the relevant parameters are the Prover and Verifier running times, the proof size, and whether a trusted setup phase is needed. In most modern ZKP systems, the Prover's (and often the Verifier's) running time grows linearly with the number of gates in the circuit, while the proof size grows with the square root or the logarithm of the number of gates. So this is all more or less good enough for some of the current applications of ZKP, from rollups to confidential transactions to privacy-focused blockchains. But…
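To illustrate the circuit-to-intermediate-format step, the sketch below hand-encodes the toy statement x³ + x + 5 = out as a three-constraint R1CS and checks that a witness satisfies (A·w) ∘ (B·w) = C·w, which is what the prover ultimately argues about. The witness layout, the field choice, and the helper names are assumptions made for this example, not the conventions of any specific proving framework.

```python
# Witness layout: w = [1, x, v1, v2, out] with v1 = x*x, v2 = v1*x,
# and out = v2 + x + 5, i.e. the statement "x**3 + x + 5 == out".
FIELD = 2**61 - 1  # a toy prime field; real systems use pairing-friendly or FFT-friendly fields

def dot(row, w):
    return sum(a * b for a, b in zip(row, w)) % FIELD

A = [[0, 1, 0, 0, 0],   # x          * x = v1
     [0, 0, 1, 0, 0],   # v1         * x = v2
     [5, 1, 0, 1, 0]]   # (v2 + x + 5) * 1 = out
B = [[0, 1, 0, 0, 0],
     [0, 1, 0, 0, 0],
     [1, 0, 0, 0, 0]]
C = [[0, 0, 1, 0, 0],
     [0, 0, 0, 1, 0],
     [0, 0, 0, 0, 1]]

def satisfies_r1cs(w):
    """Check (A·w) ∘ (B·w) = C·w row by row: the witness fits the circuit."""
    return all(dot(a, w) * dot(b, w) % FIELD == dot(c, w)
               for a, b, c in zip(A, B, C))

x = 3
w = [1, x, x * x, x**3, x**3 + x + 5]   # witness for x = 3, out = 35
assert satisfies_r1cs(w)
```

Each row encodes one multiplication gate (additions are folded into the linear combinations), which is why the number of constraints, and hence the prover's work, tracks the number of gates in the circuit.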