Introduction to Cryptographic Primitives

Folderlabs
5 min read · Mar 14, 2021

Overview

In this article we briefly review several cryptographic primitives (or schemes) that are used in blockchain protocols. We then discuss the cryptographic schemes we may use in designing the Folder Protocol for decentralized storage.

What are Cryptographic primitives?

Cryptographic primitives are low-level, thoroughly vetted cryptographic algorithms that are used as building blocks for cryptographic security protocols and systems.

Typically, a cryptographic primitive is designed to perform one specific, atomic task in a clearly defined manner, and to perform it reliably. These primitives become the fundamental building blocks of a secure cryptographic system. Some examples of cryptographic primitives are:

  • One-way hash functions (e.g. SHA-256)
  • Public-key cryptography (e.g. RSA)
  • Digital signatures (used to verify the author of a message or content)
  • Commitment schemes (allow one to commit to a chosen value while keeping it hidden from others, with the ability to reveal it later)
  • Cryptographic pseudo-random number generators
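As a concrete illustration of the commitment-scheme primitive listed above, here is a minimal hash-based commit/reveal sketch in Python. The function names (`commit`, `verify`) are our own for illustration; hiding and binding here rest on the standard assumptions about SHA-256 and a sufficiently random nonce.

```python
import hashlib
import secrets

def commit(value: bytes) -> tuple:
    """Commit to a value: returns (commitment, opening nonce)."""
    nonce = secrets.token_bytes(32)                 # random blinding factor (hiding)
    digest = hashlib.sha256(nonce + value).digest() # binding via collision resistance
    return digest, nonce

def verify(commitment: bytes, nonce: bytes, value: bytes) -> bool:
    """Check that (nonce, value) opens the commitment."""
    return hashlib.sha256(nonce + value).digest() == commitment

c, r = commit(b"my secret bid: 42")
assert verify(c, r, b"my secret bid: 42")      # correct opening accepted
assert not verify(c, r, b"my secret bid: 99")  # different value rejected
```

The nonce is what keeps the commitment hiding: without it, anyone could brute-force a low-entropy value (like a bid) against the digest.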

Cryptographic primitives are in general defined by their security properties.

zkSNARKs

zkSNARK stands for "zero-knowledge succinct non-interactive argument of knowledge"; zkSNARKs are often called zero-knowledge proofs for short. They are popular in blockchain protocols because of their security properties: (1) one party in a transaction can prove to the other that it possesses a particular piece of information without revealing what that information is; (2) proofs are succinct, and hence fast to verify; (3) they are non-interactive: a proof is constructed without any back-and-forth with the verifier.

BLS Signatures

The Boneh–Lynn–Shacham (BLS) signature scheme allows a user to verify that a signer is authentic. The scheme uses a bilinear pairing for verification, and signatures are elements of an elliptic-curve group. BLS signatures are approximately half the size of Schnorr or ECDSA signatures: a signature is not a pair of values but a single curve point. The scheme also allows all the signatures in a block to be aggregated into a single signature.

In general, pairing functions are complicated and expensive to compute, which is why BLS signature verification is slower than ECDSA verification. We want the pairing to be efficient enough that verification is fast, while at the same time revealing no information about the secret key; balancing these requirements makes the security proof difficult.

Verifiable Random Functions (VRFs)

A Verifiable Random Function (VRF) is the public-key version of a keyed cryptographic hash. Only the holder of the private key can compute the hash, but anyone with the public key can verify its correctness. VRFs are useful for preventing enumeration of hash-based data structures. A VRF maps an input to a verifiable pseudorandom output. VRFs provide deterministic pre-commitments which can be revealed at a later time using proofs which can only be generated by the private key. This is useful for providing a 1:1 mapping of personally identifiable information (PII) such as names, email addresses, and phone numbers to random values which can be committed to in advance through a time-stamping service. VRFs were introduced by Micali (founder of the Algorand blockchain), Rabin, and Vadhan in 1999. Today the primitive is used in various cryptographic schemes, protocols, and systems.

A VRF is a triple of algorithms Keygen, Evaluate, and Verify.

  • Keygen(r) → (VK, SK). On random input r, the key generation algorithm produces a verification key VK and a secret key SK.
  • Evaluate(SK, X) → (Y, ⍴). The evaluation algorithm takes as input the secret key SK and a message X, and produces a pseudorandom output string Y and a proof ⍴.
  • Verify(VK, X, Y, ⍴) → 0/1. The verification algorithm takes as input the verification key VK, the message X, the output Y and the proof ⍴. It outputs 1 if and only if Y is the output produced by the evaluation algorithm on inputs SK and X.
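The triple above can be sketched with a toy RSA-based construction in the style of an RSA full-domain-hash VRF: the proof ⍴ is an RSA "signature" on the message, and the output Y is its hash. The parameters below are far too small to be secure and the hash-to-group step is simplified; this is a sketch of the interface, not a production VRF.

```python
import hashlib

# Toy RSA key pair -- far too small for real use; illustration only.
p, q = 10007, 10009
N, e = p * q, 65537                       # (N, e) plays the role of VK
d = pow(e, -1, (p - 1) * (q - 1))         # d plays the role of SK (Python 3.8+)

def H(data: bytes) -> int:
    """Simplified full-domain hash of the message into Z_N."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % N

def evaluate(sk: int, X: bytes):
    """Evaluate(SK, X) -> (Y, rho): only the SK holder can compute this."""
    rho = pow(H(X), sk, N)                               # proof: RSA-style signature
    Y = hashlib.sha256(rho.to_bytes(8, "big")).digest()  # pseudorandom output
    return Y, rho

def verify(vk: int, X: bytes, Y: bytes, rho: int) -> bool:
    """Verify(VK, X, Y, rho): anyone with VK can check Y is the unique output."""
    return (pow(rho, vk, N) == H(X)
            and hashlib.sha256(rho.to_bytes(8, "big")).digest() == Y)

Y, rho = evaluate(d, b"hello")
assert verify(e, b"hello", Y, rho)        # valid proof accepted
assert not verify(e, b"world", Y, rho)    # wrong message rejected
```

Note how the output is deterministic (the same X always yields the same Y), yet looks random to anyone who has not seen the proof, which is exactly the property the PII-mapping use case above relies on.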

Verifiable Delay Functions (VDFs)

Generating randomness on a blockchain is very difficult. Blockchain developers often use future block hashes, block difficulty, or timestamps as randomness sources, but these can be manipulated, leading to vulnerabilities and malicious behavior. Another common problem is electing validator nodes in proof-of-stake protocols: a miner who can influence or predict the randomness can affect when they will be chosen to mine a block.

Boneh et al. define Verifiable Delay Functions as a way to slow things down verifiably:

VDFs are functions that require a moderate amount of sequential computation to evaluate, but once a solution is found, it is easy for anyone to verify that it is correct.

VDFs introduce a forced time delay to an output so that malicious actors can’t influence it by predicting future values. A valid VDF must have the following properties:

  • Sequential: anyone can compute f(x) in t sequential time steps, but no adversary, even with a large number of parallel processors, can distinguish the output of f(x) from random in significantly fewer steps. This is important for the problems mentioned above: a malicious actor should not be able to distinguish the output from an intermediate state of the VDF computation, which would give them an advantage in predicting "what comes next".
  • Efficiently verifiable: given the output y, any observer can verify that y = f(x) in a short amount of time (specifically O(log t)).
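The two properties can be sketched with a simplified version of Wesolowski's construction: evaluation performs t sequential squarings, while verification checks a short proof with just a hash and two modular exponentiations. The modulus below is a toy (real VDFs need a large group of unknown order), and the hash-to-prime step is naive; this is an illustration of the structure, not a secure VDF.

```python
import hashlib

N = 10007 * 10009   # toy modulus -- real VDFs use a large unknown-order group
t = 1000            # number of sequential squarings (the enforced "delay")

def hash_to_prime(x: int, y: int) -> int:
    """Fiat-Shamir challenge: derive a small prime from (x, y). Naive toy."""
    h = int.from_bytes(hashlib.sha256(f"{x},{y}".encode()).digest(), "big")
    cand = (h % (1 << 20)) | (1 << 19) | 1      # odd ~20-bit candidate
    while not all(cand % k for k in range(2, int(cand ** 0.5) + 1)):
        cand += 2                               # trial-division primality search
    return cand

def evaluate(x: int):
    """Slow part: y = x^(2^t) mod N via t inherently sequential squarings."""
    y = x
    for _ in range(t):
        y = y * y % N
    l = hash_to_prime(x, y)
    pi = pow(x, (1 << t) // l, N)               # short Wesolowski-style proof
    return y, pi

def verify(x: int, y: int, pi: int) -> bool:
    """Fast part: no t-step loop; since 2^t = q*l + r, check pi^l * x^r == y."""
    l = hash_to_prime(x, y)
    r = pow(2, t, l)
    return pow(pi, l, N) * pow(x, r, N) % N == y

y, pi = evaluate(12345)
assert verify(12345, y, pi)
```

The key point is the asymmetry: the prover cannot avoid the t sequential squarings, but the verifier's work is logarithmic in t, which is what makes the delay publicly checkable.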

Depth Robust Graphs

A directed acyclic graph (DAG) on n nodes with indegree d is an (n, α, β, d) depth robust graph (DRG) if every subgraph of αn nodes contains a path of length at least βn.

DRGs were first introduced by Erdős, Graham, and Szemerédi, who constructed a family of (n, α, β, c·log n)-depth-robust graphs using constant-degree bipartite expander graphs, for constants α, β, c satisfying specific constraints.

Fisch et al. additionally note in their work, Tight Proofs of Space and Replication, that the security of the Filecoin protocol depends on the best known attacks on DRGs.

An (e,d)-depth-robust directed acyclic graph (DAG) has the property that after removing any subset of up to e nodes (and adjacent edges) there remains a directed path of length d.
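The (e, d) definition can be checked by brute force on tiny graphs: remove every possible subset of e nodes and confirm a path of length d survives (removing fewer than e nodes only makes the surviving path longer, so checking exactly-e subsets suffices). This naive checker is exponential in e and only meant to make the definition concrete; the example graph is our own toy construction, not a real DRG.

```python
from itertools import combinations

def longest_path(n: int, edges: set) -> int:
    """Longest path (in edges) of a DAG on nodes 0..n-1 whose edges all go
    from lower to higher index, i.e. already in topological order."""
    dist = [0] * n
    for v in range(n):
        for u in range(v):
            if (u, v) in edges:
                dist[v] = max(dist[v], dist[u] + 1)
    return max(dist, default=0)

def is_depth_robust(n: int, edges: set, e: int, d: int) -> bool:
    """Brute-force the definition: after removing ANY e nodes (and their
    edges), a directed path of length >= d must remain."""
    for removed in combinations(range(n), e):
        kept = [v for v in range(n) if v not in removed]
        relabel = {v: i for i, v in enumerate(kept)}
        sub = {(relabel[u], relabel[v]) for (u, v) in edges
               if u in relabel and v in relabel}
        if longest_path(len(kept), sub) < d:
            return False
    return True

# A line graph 0->1->...->7 plus skip edges i->i+2, so one removal can be routed around.
n = 8
edges = {(i, i + 1) for i in range(n - 1)} | {(i, i + 2) for i in range(n - 2)}
assert is_depth_robust(n, edges, e=1, d=5)       # survives any single removal
assert not is_depth_robust(n, edges, e=2, d=5)   # removing nodes 2 and 3 cuts it
```

Real DRG constructions must of course achieve this for large n against any e-subset without exhaustive checking, which is what makes them interesting.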

Vector Commitments (VC)

Commitment schemes are one of the most fundamental cryptographic primitives. They have two basic properties:

  • Hiding guarantees that a commitment reveals no information about the underlying message.
  • Binding ensures that the committer cannot change their mind about the committed message; namely, it is not possible to open a commitment to two distinct values m’ ≠ m.

Vector Commitments are a special class of commitment schemes in which one can commit to a vector $\vec v$ of length n and later open the commitment at any position i ∈ [n]. The distinguishing feature of VCs is that both the commitment and an opening for a position i have size independent of n. In terms of security, VCs should be position binding, i.e., one cannot open a commitment at position i to two distinct values $v_{i} ≠ v’_{i}.$
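A Merkle tree gives a simple, weaker cousin of a vector commitment: the root commits to the whole vector and each position opens with a sibling-hash path. Note the opening here is O(log n), not the constant size that true VCs (e.g. pairing- or RSA-based) achieve; this sketch only illustrates the commit/open/verify interface and position binding, and assumes a power-of-two vector length for simplicity.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def commit(vector: list) -> tuple:
    """Build a Merkle tree over the vector; the root is the commitment."""
    level = [h(v) for v in vector]
    tree = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return tree[-1][0], tree

def open_at(tree: list, i: int) -> list:
    """Opening for position i: the sibling hashes along the path to the root."""
    proof = []
    for level in tree[:-1]:
        proof.append(level[i ^ 1])       # sibling at this level
        i //= 2
    return proof

def verify(root: bytes, i: int, value: bytes, proof: list) -> bool:
    """Recompute the root from the claimed value and its sibling path."""
    node = h(value)
    for sibling in proof:
        node = h(node + sibling) if i % 2 == 0 else h(sibling + node)
        i //= 2
    return node == root

v = [b"a", b"b", b"c", b"d"]             # length must be a power of two here
root, tree = commit(v)
proof = open_at(tree, 2)
assert verify(root, 2, b"c", proof)      # correct value at position 2 opens
assert not verify(root, 2, b"x", proof)  # position binding: wrong value fails
```

Position binding here reduces to SHA-256 collision resistance: opening position i to two different values would require a hash collision somewhere along the path.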

Folder Protocol

At Folder Labs, we are investigating the available cryptographic primitives to enable scalability and performance in decentralized storage without sacrificing security properties. Our current research strongly suggests considering special Vector Commitments as the atomic cryptographic primitive for implementing the Folder Protocol. One issue we may face is that these schemes are very recent and attacks on the core algorithms have not been well studied academically, so there is limited knowledge about attack vectors for a blockchain platform implementing these special commitment schemes. This is why we are also considering implementing a second cryptographic primitive in the core Folder Protocol algorithm.


Folderlabs

Folder Protocol (FOL) is a decentralized storage project with layer-2 solutions, seeking to safely and efficiently store and transfer large batches of data.