Chopsticks: Fork-Free Two-Round Multi-Signatures from Non-Interactive Assumptions



Our Contribution
Our work answers the above question in the affirmative. Our contributions are the first two two-round multi-signature schemes based on a non-interactive assumption that do not use the Forking Lemma. Both of our schemes are proven secure in the random oracle model based on the DDH assumption. Concretely, we construct
1. a two-round multi-signature scheme with a security loss O(Q_S) and key aggregation, where Q_S is the number of signing queries, and
2. the first two-round multi-signature scheme with a fully tight security proof.
We compare our schemes with existing schemes in Table 1. For roughly 128-bit security, our second scheme can be instantiated with standardized 128-bit secure curves, in contrast to all previous two-round schemes. The proof of our first scheme is non-tight, but it does not rely on rewinding, and it achieves tighter security from standard, non-interactive assumptions than other non-tight schemes (such as HBMS and Musig2). Hence, as long as the number of signing queries Q_S is less than 2^{192−128} = 2^{64}, we can implement our first scheme with a standardized 192-bit secure curve to achieve 128-bit security, while this is not the case for HBMS and Musig2. We note that our schemes do not have some additional beneficial properties (e.g. Schnorr-compatible signatures or support for preprocessing) as in Musig2 [NRS21]. We leave achieving these properties without rewinding as an interesting open problem.

Table 1: Comparison of existing multi-signature schemes (top) in the random oracle model with our schemes (bottom), listing for each scheme its assumption, number of rounds, support for key aggregation, and security loss. Here, Q_H and Q_S denote the number of random oracle and signing queries, respectively, and ε denotes the advantage of an adversary against the scheme. The algebraic one-more discrete logarithm (AOMDL) assumption is a (stronger) interactive variant of DLOG.
A crucial building block for our construction is a special kind of DDH-based commitment scheme without pairings. Concretely, our commitment scheme has the following properties.
• It commits to pairs of group elements in a homomorphic way.
• It has a dual-mode property, i.e. indistinguishable keys in statistically hiding and statistically binding mode, with tight multi-key indistinguishability.
• The hiding mode offers a special form of equivocation trapdoor, which allows opening commitments to group elements output by the Honest-Verifier Zero-Knowledge (HVZK) simulator of Schnorr-like identification protocols.
Such a commitment scheme can be useful for constructing other interactive signature variants, and we believe that it is of independent interest. In this paper, we construct the first commitment scheme satisfying the above properties simultaneously without using pairings. Our commitment scheme can be seen as an extension of the commitment scheme in [BCJ08]. Contrary to our scheme, the commitment scheme in [BCJ08] commits to single group elements, and no statistically binding mode is shown, which makes it less suitable for our multi-signature constructions. Other previous commitment schemes either have no trapdoor property [GOS06, GS08], or homomorphically commit to ring or field elements [GQ88, Ped92]. To the best of our knowledge, the only existing solution uses pairings [Gro09].

Concurrent Work
In a concurrent work (also at Eurocrypt 2023), Tessaro and Zhu [TZ23] also presented (among other contributions) a new two-round multi-signature scheme. Both our work and theirs focus on avoiding interactive assumptions. However, while we additionally remove the security loss, Tessaro and Zhu concentrate on having a partially non-interactive scheme. That is, the first round of the signing protocol is independent of the message being signed. In a nutshell, they generalize Musig2 to linear function families. Then, under a suitable instantiation, the interactive assumption for Musig2 can be avoided. Similar to Musig2, the resulting scheme is partially non-interactive. Still, their scheme inherits the security loss of Musig2 due to (double) rewinding.

Technical Overview
We give an intuitive overview of our constructions and the challenges we solve.

Schnorr-Based Multi-Signatures.
We start by recalling the basic template for multi-signatures based on the Schnorr identification scheme [Sch91]. Let G be a group of prime order p with generator g. We explain the template using the vector space homomorphism F : x ↦ g^x mapping from Z_p to G, and write both domain and range additively. In a first approach to get a multi-signature scheme, we let each signer i with secret key sk_i sample a random r_i ∈ Z_p, and send R_i := F(r_i) to all other signers. Then, an aggregated R is computed as R = Σ_i R_i. From this R, signers derive challenges c_i using a random oracle. Then, each signer computes a response s_i = c_i · sk_i + r_i and sends this response. Finally, the signature contains R and the aggregated response s = Σ_i s_i. Verification is very similar to the verification of Schnorr signatures. As each signer in this simple two-round scheme is almost identical to the prover algorithm of the Schnorr identification scheme, one may hope that this scheme is secure. However, early works already noted that it is not [BN06].
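The template above can be sketched in a few lines. The following is an illustrative toy only: it uses a tiny subgroup instead of a cryptographic group, the inputs to the challenge derivation (R, the signer's key, the message) are our assumption, and — as the text stresses — this simple two-round scheme is insecure; the code merely shows the data flow and that honest executions verify.

```python
import hashlib
import random

# Toy subgroup of Z_23^* of prime order p = 11 with generator g = 4
# (illustrative sizes only; real instantiations use ~256-bit groups).
p, q, g = 11, 23, 4

def F(x):  # the homomorphism F : x -> g^x
    return pow(g, x, q)

def H(*args):  # random-oracle stand-in deriving challenges in Z_p
    data = "|".join(map(str, args)).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % p

N = 3
sks = [random.randrange(1, p) for _ in range(N)]
pks = [F(sk) for sk in sks]

# Round 1: each signer i samples r_i and publishes R_i = F(r_i).
rs = [random.randrange(1, p) for _ in range(N)]
Rs = [F(r) for r in rs]

R = 1
for Ri in Rs:  # aggregate R (a product in the group, a "sum" in additive notation)
    R = R * Ri % q

# Round 2: derive per-signer challenges from R and respond.
m = "hello"
cs = [H(R, pk, m) for pk in pks]
ss = [(c * sk + r) % p for c, sk, r in zip(cs, sks, rs)]
s = sum(ss) % p  # aggregated response

# Schnorr-style verification: g^s == R * prod_i pk_i^{c_i}
rhs = R
for c, pk in zip(cs, pks):
    rhs = rhs * pow(pk, c, q) % q
assert pow(g, s, q) == rhs
```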
While there are concrete attacks against the scheme, for our purposes it is more important to understand where the security proof fails. The proof fails when we try to simulate the honest signer without knowing its secret key sk_1. Following Schnorr signatures and identification, this would be done by sampling R_1 := F(s_1) − c_1 · pk_1 for random c_1 and s_1, and then programming the random oracle accordingly at position R. The problem in the multi-signature setting is that we first have to output R_1, and only then does the adversary output the remaining R_i, so that it has full control over the aggregate R. Thus, the random oracle may already be defined at R. Previous works [BN06, MPSW19, BDN18] solve this issue by introducing an additional round, in which all signers commit to their R_i using a random oracle. This allows us to extract all R_i from these commitments in the reduction, and therefore R has enough entropy to program the random oracle.
A second problem that we encounter in the above approach is the extraction of a solution from the forgery. Namely, to extract a discrete logarithm of pk_1, we need to rely on rewinding. Some of the well-known schemes [MPSW19, BDN18] even use rewinding multiple times. This leads to security bounds with essentially no useful quantitative guarantee for concrete security.
Towards A Scheme without Rewinding. To avoid rewinding, our first idea is to rely on a different homomorphism F. Namely, we borrow techniques from lossy identification [KW03, AFLT12, KMP16] and use F : x ↦ (g^x, h^x) for a second generator h ∈ G. We can then give a non-rewinding security proof for the three-round schemes in [BN06, MPSW19, BDN18]. Concretely, we first switch pk_1 from the range of F to a random element in G², using the DDH assumption. Then, we can argue that a forgery is hard to compute using a statistical argument. We note that this idea is (implicitly) already present in [BN06, FH21]. As we will see, combining it with techniques to avoid the extra round is challenging.
Towards Two-Round Schemes. To go from a three-round scheme as above to a two-round scheme, our goal is to avoid the first round. Recall that this round was needed to simulate R_1 using random oracle programming. Our idea to tackle the simulation problem is a bit different. Namely, going back to the (insecure) two-round scheme, our goal is to send R_1 after we learn c_1. If we manage to do that, we can simulate by setting R_1 := F(s_1) − c_1 · pk_1 for random s_1. Of course, just sending R_1 after learning c_1 should only be possible for the reduction. Following Damgård [Dam00], this high-level strategy can be implemented using a trapdoor commitment scheme Com, and sending com_1 = Com(ck, R_1) as the first message. The challenges c_i are then derived from an aggregated commitment com using the random oracle. Later, the reduction can open this commitment to F(s_1) − c_1 · pk_1 using the trapdoor for commitment key ck. To support aggregation, the commitment scheme should have homomorphic properties. Note that this approach has been used in the lattice setting in a recent work [DOTT21]. However, implementing such a commitment scheme for (pairs of) group elements is highly non-trivial, as we will see. Also, as already pointed out in [DOTT21], it is hard to make this two-round approach work while avoiding rewinding at the same time. The reason is that a trapdoor commitment scheme cannot be statistically binding. But if we want to make use of the statistical argument from lossy identification discussed above, we need that R is fixed before the c_i are sampled, which requires statistical binding. With a computationally binding commitment scheme, we end up in a rewinding reduction (to binding) again. Our first main technical contribution is to overcome this issue.
Chopstick One: Our Scheme Without Rewinding. Our idea to overcome the above problem is to demand a dual-mode property from the commitment scheme Com. Namely, there should be an indistinguishable second way to set up the commitment key ck, such that for such a key the scheme is statistically binding. This does not solve the problem yet, because we require ck to be in trapdoor mode for simulation, and in binding mode for the final forgery. The solution is to sample ck in a message-dependent way using another random oracle, which is (for other reasons) already done in earlier works [DEF + 19, DOTT21]. In this way, we can embed a binding commitment key in some randomly guessed random oracle queries, and a trapdoor key in others. Note that this requires a tight multi-key indistinguishability of the commitment scheme. Assuming we have such a commitment scheme, we end up with our first construction, which is presented formally in Section 3.2. Of course, this strategy still has a security loss linear in the number of signing queries due to the guessing argument, but it avoids rewinding, leading to an acceptable security bound. In addition, we can implement the approach in a way that supports key aggregation.
Chopstick Two: Our Fully Tight Scheme. The security loss in our first scheme results from partitioning random oracle queries into two classes, namely queries returning binding keys, and queries returning trapdoor keys. To do such a partitioning in a tight way, we may try to use a Katz-Wang random bit approach [GJKW07]. This simple approach can be used in standard digital signatures. However, it turns out that it does not work in our case. To see this, recall that following this approach, we would compute two message-dependent commitment keys ck_0 and ck_1. Then, for each message, we would embed a binding key in one branch, and a trapdoor key in the other branch, e.g. ck_0 binding and ck_1 with trapdoor. In the signing protocol, we would abort one of the branches pseudorandomly based on the message. Then we could use the trapdoor branch in the signing, and hope that the forgery uses the binding branch. However, this strategy crucially relies on the fact that the aborting happens in a way that is pseudorandom to the adversary. Otherwise, the adversary could always choose the trapdoor branch for its forgery. While we can implement this in a signature scheme, it fails in our multi-signature scheme, because all signers must use the same commitment key to make aggregation possible. At the same time, the aborted branch must depend on secret data of the simulated signer to remain pseudorandom.
To solve this problem, we observe that the above approach uses a pseudorandom "branch selection" and aborts the other branch. Our solution can be phrased as a pseudorandom "branch-to-key matching". Namely, we give each signer two public keys (pk_{i,0}, pk_{i,1}). The signing protocol is run in two instances in parallel. One instance uses ck_0, and one uses ck_1 as above. More precisely, we commit to R_0 via ck_0 and to R_1 via ck_1. Then we aggregate and determine the challenges c_{i,0} and c_{i,1}. However, before sending the response s_i = (s_{i,0}, s_{i,1}), each signer separately determines which key to use in which instance, based on a pseudorandom bit b_i that each signer i computes independently, and that will be included in the final signature to make verification possible. This decouples the public key that is used from the commitment key that is used. Now we are ready to discuss the implication of this change. Namely, our reduction chooses pk_{1,0} honestly and pk_{1,1} as a lossy key, i.e. random instead of in the range of F. Then, in each signing interaction, the reduction can match the honest public key with the binding commitment key and the lossy public key with the trapdoor commitment key by setting b_1 accordingly. In this way, we can simulate one branch using the actual secret key, and the other branch using the commitment trapdoor.
For the forgery, we hope that the matching is the other way around, such that binding commitment key and lossy public key match, which makes the statistical argument from lossy identification possible.
Overall, this approach leads to our fully tight scheme, presented in Section 3.3.
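The branch-to-key matching can be illustrated in isolation. The following sketch is a structure-only toy under heavy assumptions: it omits the commitments and aggregation entirely, uses stand-in random challenges, and the function names (H_b and the seed handling) are our own illustration, not the scheme's formal definition. It only shows how the bit b_1, included in the signature, tells the verifier which public key answers which branch.

```python
import hashlib
import random

p, q, g = 11, 23, 4  # toy order-11 subgroup of Z_23^* (illustrative only)

def F(x):
    return pow(g, x, q)

def H_b(seed, m):  # pseudorandom bit b_1, recomputable only with the secret seed
    return hashlib.sha256(f"{seed}|{m}".encode()).digest()[0] & 1

# Signer 1 holds two key pairs and a secret seed.
x = [random.randrange(1, p), random.randrange(1, p)]
pk = [F(x[0]), F(x[1])]
seed = random.randrange(2**32)

# Two parallel instances: nonces and (stand-in) challenges per branch.
m = "msg"
r = [random.randrange(1, p), random.randrange(1, p)]
R = [F(r[0]), F(r[1])]
c = [random.randrange(p), random.randrange(p)]  # stand-ins for c_{1,0}, c_{1,1}

# Branch-to-key matching: branch 0 is answered with x_{b1}, branch 1 with x_{1-b1}.
b1 = H_b(seed, m)
s = [(c[0] * x[b1] + r[0]) % p, (c[1] * x[1 - b1] + r[1]) % p]

# The verifier learns b1 from the signature and checks both branches accordingly.
assert pow(g, s[0], q) == R[0] * pow(pk[b1], c[0], q) % q
assert pow(g, s[1], q) == R[1] * pow(pk[1 - b1], c[1], q) % q
```

In the reduction, pk[1] would be replaced by a lossy (random) key, and b1 chosen so that the trapdoor commitment key always lands on the lossy branch.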
The Challenge of Instantiating the Commitment. One may observe that we shifted a lot of the challenges that we encountered into properties of the underlying commitment scheme. This naturally raises the question of whether such a commitment scheme can be found. In fact, constructing this commitment scheme can be understood as our second main technical contribution. Let us first explain why it is non-trivial to construct such a scheme. The main barrier results from the algebraic structure that we demand. Namely, we need to commit to group elements R ∈ G. A naive idea would be to use any trapdoor commitment scheme, e.g. Pedersen commitments, by first encoding R in the appropriate message space. However, this would destroy all homomorphic properties that we need, and we should not forget that we need a dual-mode property. This brings us to Groth-Sahai commitments [GS08], which can commit to group elements. Indeed, these commitments are homomorphic, and have keys that are indistinguishable from random, such that we can sample them using a random oracle. They are also dual-mode based on DDH, which allows us to use the random self-reducibility of DDH to show tight multi-key indistinguishability. However, the trapdoor property turns out to be the main challenge. To see why this is problematic, note that the opening information of these commitments typically contains elements from Z_p that are somehow used as exponents. There are exceptions to this rule, like [Gro09], but they use pairings and the DLIN assumption, which we aim to avoid. This means that the trapdoor should allow us to sample exponents, given a group element R to which we want to open the commitment. This naturally corresponds to having a trapdoor for the discrete logarithm problem, which we do not have.

Our Solution: Weakly Equivocable Commitments.
Our starting point is the commitment scheme for group elements given in [GS08]. Namely, commitment keys correspond to matrices A = (A_{i,j})_{i,j} ∈ G^{2×2}, and to commit to a message R = g^r ∈ G with randomness (α, β) ∈ Z_p^2, one computes com := (A_{1,1}^α · A_{1,2}^β, R · A_{2,1}^α · A_{2,2}^β). That is, setting E = (E_{i,j})_{i,j} ∈ Z_p^{2×2} such that g^{E_{i,j}} = A_{i,j}, we can write the discrete logarithm of com as (0, r)^t + E · (α, β)^t. In binding mode, the matrix E has rank 1, while E has full rank in hiding mode.
It is easy to see that this commitment scheme for group elements is homomorphic. However, we stress that there is no simple way to implement a trapdoor for equivocation. To see this, note that if we want to open a commitment com to a message R ∈ G, we need to output a suitable tuple (α, β). If we knew the discrete logarithm of com, then we still would need to know the discrete logarithm of R to find such a tuple. The key insight of our trapdoor construction is that we do not need to be able to open com to an arbitrary message R. Instead, it will be sufficient if we can open it to messages of the form R = g^s · pk^c, where we do not know c when we fix the commitment com, but we know pk when setting up A. To explain why this helps, assume we want to find a valid opening (α, β) in this case. Writing com = (C_0, C_1), we need to satisfy C_0 = A_{1,1}^α · A_{1,2}^β and C_1 = g^s · pk^c · A_{2,1}^α · A_{2,2}^β. It seems like we did not make progress, because even if we know the discrete logarithms of C_0, C_1, the term pk^c is not known in the exponent. Now, our key idea to solve this is to generate A with respect to basis pk in the second row. Namely, for a trapdoor matrix D = (d_{i,j})_{i,j} ∈ Z_p^{2×2}, we generate A as A_{1,j} := g^{d_{1,j}} and A_{2,j} := pk^{d_{2,j}}. In this way, the equations that we need to satisfy become C_0 = g^{d_{1,1}α + d_{1,2}β} and C_1 = g^s · pk^{c + d_{2,1}α + d_{2,2}β}.
Next, we get rid of the term g^s by shifting C_1 accordingly. Namely, recall that we can sample s at random long before we learn c. Setting C_0 = g^τ and C_1 = g^s · pk^ρ for random τ, ρ, we obtain the equations τ = d_{1,1}α + d_{1,2}β and ρ − c = d_{2,1}α + d_{2,2}β. Given the trapdoor D = (d_{i,j})_{i,j}, these can easily be solved for (α, β) by solving (τ, ρ − c)^t = D · (α, β)^t. We are confident that such a weak and structured equivocation property can be used in other applications as well, and we formally define this type of commitment scheme in Section 3.1.
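The equivocation mechanism above is concrete enough to run end to end. The following is a minimal sketch in a toy group (tiny parameters, a fixed invertible trapdoor matrix D of our choosing, and no hiding/binding analysis): it fixes a commitment before c is known and then uses D to solve for an opening to R = g^s · pk^c.

```python
import random

p, q, g = 11, 23, 4          # toy order-11 subgroup of Z_23^* (illustrative sizes only)
w = random.randrange(1, p)   # discrete log of pk, known when setting up A
pk = pow(g, w, q)

D = [[2, 3], [1, 4]]                                  # trapdoor matrix, invertible mod 11
A = [[pow(g,  D[0][0], q), pow(g,  D[0][1], q)],      # first row in basis g
     [pow(pk, D[1][0], q), pow(pk, D[1][1], q)]]      # second row in basis pk

def com(R, a, b):            # Com(ck, R; (alpha, beta)) as in the text
    return (pow(A[0][0], a, q) * pow(A[0][1], b, q) % q,
            R * pow(A[1][0], a, q) * pow(A[1][1], b, q) % q)

# TCom: fix the commitment (C0, C1) before the challenge c is known.
tau, rho, s = (random.randrange(p) for _ in range(3))
C0, C1 = pow(g, tau, q), pow(g, s, q) * pow(pk, rho, q) % q

# TCol: after learning c, solve (tau, rho - c)^t = D * (alpha, beta)^t mod p.
c = random.randrange(p)
det_inv = pow(D[0][0] * D[1][1] - D[0][1] * D[1][0], -1, p)
alpha = det_inv * ( D[1][1] * tau - D[0][1] * (rho - c)) % p
beta  = det_inv * (-D[1][0] * tau + D[0][0] * (rho - c)) % p

# The commitment opens to R = g^s * pk^c with randomness (alpha, beta).
R = pow(g, s, q) * pow(pk, c, q) % q
assert com(R, alpha, beta) == (C0, C1)
```

The check works because the first coordinate forces d_{1,1}α + d_{1,2}β = τ and the second forces d_{2,1}α + d_{2,2}β = ρ − c, exactly the linear system solved above.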

Preliminaries
We denote the security parameter by λ ∈ N, and all algorithms get 1^λ implicitly as input. We write x ←$ S if x is sampled uniformly at random from a finite set S, and we write x ← D if x is sampled according to a distribution D. We write y ← A(x) if y is output from (probabilistic) algorithm A on input x with uniform coins. To make the coins explicit, we use the notation y = A(x; ρ). The notation y ∈ A(x) indicates that y is a possible output of A(x). We use standard asymptotic notation, the notion of negligible functions, and PPT algorithms. If G is a security game, we write G ⇒ b to state that G outputs b. In all our games, numerical variables are implicitly initialized with 0, and lists and sets are initialized with ∅. We define [K] := {1, . . . , K}, and denote the Bernoulli distribution with parameter γ ∈ [0, 1] by B_γ.

Multi-Signatures.
We introduce syntax and security for multi-signatures, following the established security notions in the plain public key model [BN06]. The only minor difference to [BN06] is that we assume the list of public keys participating in the signing protocol is given by a set, and not a multi-set. This is in line with other works, e.g., [CKM21, DOTT21]. We opt for using sets for simplicity of exposition, and as a signer can always refuse to sign when its key would be contained twice. We will assume that there is a canonical ordering of given sets, e.g. lexicographic, that allows us to uniquely encode sets P = {pk_1, . . . , pk_N}. For this encoding, we write ⟨P⟩ throughout the paper. Further, for simplicity of notation, we assume that the honest public key in our security definition is the entry pk_1 in this set.

Definition 1 (Multi-Signature Scheme). A (two-round) multi-signature scheme is a tuple of PPT algorithms MS = (Setup, Gen, Sig, Ver) with the following syntax:
• Setup(1^λ) → par takes as input the security parameter 1^λ and outputs global system parameters par. We assume that par implicitly defines sets of public keys, secret keys, messages and signatures, respectively. All algorithms related to MS take par at least implicitly as input.
• Gen(par) → (pk, sk) takes as input system parameters par, and outputs a public key pk and a secret key sk.
• Sig = (Sig_0, Sig_1, Sig_2) is split into three algorithms:
  - Sig_0(P, sk, m) → (pm_1, St_1) takes as input a set P = {pk_1, . . . , pk_N} of public keys, a secret key sk, and a message m, and outputs a protocol message pm_1 and a state St_1.
  - Sig_1(St_1, M_1) → (pm_2, St_2) takes as input a state St_1 and a tuple M_1 = (pm_{1,1}, . . . , pm_{1,N}) of protocol messages, and outputs a protocol message pm_2 and a state St_2.
  - Sig_2(St_2, M_2) → σ takes as input a state St_2 and a tuple M_2 = (pm_{2,1}, . . . , pm_{2,N}) of protocol messages, and outputs a signature σ.
• Ver(P, m, σ) → b is deterministic, takes as input a set P = {pk_1, . . . , pk_N} of public keys, a message m, and a signature σ, and outputs a bit b ∈ {0, 1}.
We require that MS is complete, i.e.
for all par ∈ Setup(1^λ), all N = poly(λ), all (pk_j, sk_j) ∈ Gen(par) for j ∈ [N], and all messages m, we have Ver(P, m, σ) = 1 for every signature σ output by an honest protocol execution MS.Exec(P, (sk_j)_{j∈[N]}, m), where algorithm MS.Exec is defined in Figure 1.
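Since Figure 1 is not reproduced here, the following sketch is an assumed reconstruction of the data flow of MS.Exec: run Sig_0 for every signer, broadcast the first messages, run Sig_1, broadcast the second messages, and let one party run Sig_2. The toy scheme plugged into the interface is the insecure Schnorr-style template from the overview, used only so the driver has something to execute.

```python
import hashlib
import random

p, q, g = 11, 23, 4  # toy order-11 subgroup of Z_23^* (illustrative only)

def F(x):
    return pow(g, x, q)

def H(*a):
    data = "|".join(map(str, a)).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % p

# Toy (insecure) instantiation of the Sig_0 / Sig_1 / Sig_2 interface.
def Sig0(P, sk, m):
    r = random.randrange(1, p)
    return F(r), (P, sk, r, m)                  # (pm_1, St_1)

def Sig1(St1, M1):                              # M1 = (pm_{1,1}, ..., pm_{1,N})
    P, sk, r, m = St1
    R = 1
    for Ri in M1:
        R = R * Ri % q
    c = H(R, F(sk), m)
    return (c * sk + r) % p, (P, m, R)          # (pm_2, St_2)

def Sig2(St2, M2):                              # M2 = (pm_{2,1}, ..., pm_{2,N})
    P, m, R = St2
    return (R, sum(M2) % p)                     # signature sigma

def Ver(P, m, sigma):
    R, s = sigma
    rhs = R
    for pk in P:
        rhs = rhs * pow(pk, H(R, pk, m), q) % q
    return pow(g, s, q) == rhs

def ms_exec(P, sks, m):  # assumed reconstruction of MS.Exec (Figure 1)
    round1 = [Sig0(P, sk, m) for sk in sks]
    M1 = [pm for pm, _ in round1]
    round2 = [Sig1(St, M1) for _, St in round1]
    M2 = [pm for pm, _ in round2]
    return Sig2(round2[0][1], M2)               # any party can aggregate

sks = [random.randrange(1, p) for _ in range(3)]
P = [F(sk) for sk in sks]
sigma = ms_exec(P, sks, "msg")
assert Ver(P, "msg", sigma)
```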
Definition 2 (Key Aggregation). A multi-signature scheme MS = (Setup, Gen, Sig, Ver) is said to support key aggregation, if the algorithm Ver can be split into two deterministic polynomial time algorithms Agg, VerAgg with the following syntax:
• Agg(P) → p̄k takes as input a set P = {pk_1, . . . , pk_N} of public keys and outputs an aggregated key p̄k.
• VerAgg(p̄k, m, σ) → b is deterministic, takes as input an aggregated key p̄k, a message m, and a signature σ, and outputs a bit b ∈ {0, 1}.
Precisely, algorithm Ver(P, m, σ) can be written as VerAgg(Agg(P), m, σ).
Definition 3 (MS-EUF-CMA Security). Let MS = (Setup, Gen, Sig, Ver) be a multi-signature scheme and consider the game MS-EUF-CMA defined in Figure 2. We say that MS is MS-EUF-CMA secure, if for all PPT adversaries A, the following advantage is negligible:

Adv^{MS-EUF-CMA}_{MS,A}(λ) := Pr[MS-EUF-CMA^A_MS(λ) ⇒ 1].

For simplicity of exposition, we assume that the canonical ordering of sets is chosen such that the honest public key is always at the first position if it is included.

Linear Function Families.
To present our constructions in a modular way, we make use of the abstraction of linear function families. Our definition is close to previous definitions [HKL19, KLR21, CAHL + 22]. As it is not needed for our instantiations, we restrict our setting to vector spaces instead of pseudo modules.

Definition 4 (Linear Function Family). A linear function family (LFF) is a tuple of PPT algorithms
LF = (Gen, F) with the following syntax:
• Gen(1^λ) → par takes as input the security parameter 1^λ and outputs parameters par. We assume that par implicitly defines the following sets:
  - A set of scalars S_par, which forms a field.
  - A domain D_par, which forms a vector space over S_par.
  - A range R_par, which forms a vector space over S_par.
We omit the subscript par if it is clear from the context, and naturally denote the operations of these fields and vector spaces by + and ·.
• F(par, x) → X is deterministic, takes as input parameters par and an element x ∈ D, and outputs an element X ∈ R. For all parameters par, F(par, ·) realizes a homomorphism, i.e.

F(par, s · x + y) = s · F(par, x) + F(par, y) for all s ∈ S and x, y ∈ D.
We omit the input par if it is clear from the context.
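The homomorphism can be checked exhaustively for the DDH-style instantiation F : x ↦ (g^x, h^x) mentioned in the overview. The toy parameters below (an order-11 subgroup of Z_23^*) are illustrative only; they are small enough to test every triple (s, x, y).

```python
p, q = 11, 23            # toy: order-11 subgroup of Z_23^* (illustrative only)
g, h = 4, 9              # two generators of the order-11 subgroup

def F(x):                # F : Z_p -> G^2, x -> (g^x, h^x)
    return (pow(g, x, q), pow(h, x, q))

def add(X, Y):           # group operation on the range, component-wise
    return (X[0] * Y[0] % q, X[1] * Y[1] % q)

def smul(s, X):          # scalar action of S = Z_p on the range
    return (pow(X[0], s, q), pow(X[1], s, q))

# Homomorphism: F(s*x + y) = s*F(x) + F(y) for all scalars s and all x, y in D.
for s in range(p):
    for x in range(p):
        for y in range(p):
            assert F((s * x + y) % p) == add(smul(s, F(x)), F(y))
```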
We formalize necessary conditions under which a linear function family can be used to construct so-called lossy identification [AFLT12]. Our constructions will rely on such linear function families. We also give a definition that captures a similar property in the context of key aggregation.
Definition 5 (Lossiness Admitting LFF). We say that a linear function family LF = (Gen, F) is ε_l-lossiness admitting, if the following properties hold:
• Key Indistinguishability. For any PPT algorithm A, the following advantage is negligible:
• Lossy Soundness. For any unbounded algorithm A, the following probability is at most ε_l:

Definition 6 (Aggregation Lossy Soundness). We say that a linear function family LF = (Gen, F) satisfies ε_al-aggregation lossy soundness, if for any unbounded algorithm A, the following probability is at most ε_al:

Assumptions.
We recall the computational assumptions that we need.

Definition 7 (DDH Assumption). Let
GGen be an algorithm that on input 1^λ outputs the description of a group G of prime order p with generator g. We say that the DDH assumption holds relative to GGen, if for all PPT algorithms A, the following advantage is negligible:

In the following, we define an equivalent variant of the DDH assumption, called uDDH3. In the terminology of [EHK+13], uDDH3 is the 2-fold U_{3,1}-Matrix-DDH (MDDH) assumption. By its random self-reducibility [EHK+13, Lemma 1], the 2-fold U_{3,1}-MDDH assumption is tightly equivalent to the U_{3,1}-MDDH assumption. By Lemma 1 in [LP20], U_{3,1}-MDDH is in turn tightly equivalent to U_1-MDDH, which is the DDH assumption. Hence, the DDH and uDDH3 assumptions are tightly equivalent.
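For intuition, the DDH game hands the adversary either a real tuple (g^a, g^b, g^{ab}) or one with an independent third exponent, and its advantage is the difference in its acceptance probabilities. A minimal sampler in the toy group (for the fake case we deliberately shift the exponent away from ab so the sanity checks are deterministic; real challengers sample it uniformly):

```python
import random

p, q, g = 11, 23, 4  # toy order-11 subgroup of Z_23^* (illustrative only)

def ddh_tuple(real):
    """Return ((g^a, g^b, g^c), (a, b)) with c = ab mod p iff real."""
    a, b = random.randrange(1, p), random.randrange(1, p)
    c = a * b % p if real else (a * b + random.randrange(1, p)) % p
    return (pow(g, a, q), pow(g, b, q), pow(g, c, q)), (a, b)

# Sanity checks using the known exponents (which the adversary never sees).
(A, B, C), (a, _) = ddh_tuple(True)
assert pow(B, a, q) == C      # real tuples satisfy C = g^{ab}
(A, B, C), (a, _) = ddh_tuple(False)
assert pow(B, a, q) != C      # the shifted exponent never equals ab mod p
```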

Definition 8 (uDDH3 Assumption). Let
GGen be an algorithm that on input 1^λ outputs the description of a group G of prime order p with generator g. We say that the uDDH3 assumption holds relative to GGen, if for all PPT algorithms A, the following advantage is negligible:

Constructions
In this section, we present our construction of two-round multi-signatures. First, we give a definition of a special commitment scheme that will be used in both constructions. Then, we present the constructions in an abstract way. For the instantiation, we refer to Section 4.

Preparation: Special Commitments
In this section we define a special kind of commitment scheme. We will make use of such a scheme in our constructions of multi-signatures. Before we give the definition, we explain the desired properties at a high level. First of all, we want to be able to commit to elements R ∈ R in the range of a given linear function family. Second, we need the commitment scheme to be homomorphic in both messages and randomness, allowing us to aggregate commitments during the signing protocol. Third, we need a certain dual-mode property, ensuring that we can set up keys either in a perfectly hiding or in a perfectly binding mode. This will allow us to make the commitment key for the forgery binding, while associating an equivocation trapdoor to the keys used to answer signing queries. We emphasize that we do not need a full-fledged equivocation feature. This is because we already know parts of the structure of messages to which we want to open the commitment. Looking ahead, this is the reason we can instantiate the commitment in the DDH setting.

Definition 9 (Special Commitment Scheme). Let LF = (LF.Gen, F) be a linear function family and G = {G_par}, H = {H_par} be families of subsets of abelian groups with efficiently computable group operations ⊕ and ⊗, respectively. Let K = {K_par} be a family of sets. An (ε_b, ε_g, ε_t)-special commitment scheme for LF with key space K, randomness space G and commitment space H is a tuple of PPT algorithms CMT = (BGen, TGen, Com, TCom, TCol) with the following syntax:
• BGen(par) → ck takes as input parameters par, and outputs a key ck ∈ K_par.
• TGen(par, X) → (ck, td) takes as input parameters par and an element X ∈ R, and outputs a key ck ∈ K_par and a trapdoor td.
• Com(ck, R; ϕ) → com takes as input a key ck, an element R ∈ R, and randomness ϕ ∈ G_par, and outputs a commitment com ∈ H_par.
• TCom(ck, td) → (com, St) takes as input a key ck and a trapdoor td, and outputs a commitment com ∈ H_par and a state St.
• TCol(St, c) → (ϕ, R, s) takes as input a state St and an element c ∈ S, and outputs randomness ϕ ∈ G_par and elements R ∈ R, s ∈ D. We omit the subscript par if it is clear from the context.
Further, the algorithms are required to satisfy the following properties:
• Homomorphism. For all par ∈ LF.Gen(1^λ), ck ∈ K_par, R_0, R_1 ∈ R and ϕ_0, ϕ_1 ∈ G, the following holds:
• Good Parameters. There is a set Good, such that membership in Good can be decided in polynomial time, and
• Uniform Keys. For all (par, x) ∈ Good, the following distributions are identical:
• Special Trapdoor Property. For all (par, x) ∈ Good and all c ←$ S, the following distributions T_0 and T_1 have statistical distance at most ε_t:
• Multi-Key Indistinguishability. For every Q = poly(λ) and any PPT algorithm A, the following advantage is negligible: where games KEYDIST_0, KEYDIST_1 are defined in Figure 3.
• Statistically Binding. There exists some (unbounded) algorithm Ext, such that for every (unbounded) algorithm A the following probability is at most ε_b:

Our Construction with Key Aggregation
In this section, we construct a two-round multi-signature scheme with key aggregation. Although the scheme will not be tight, the security proof will not use rewinding, leading to an acceptable security loss. For our scheme, we need a lossiness admitting linear function family LF = (LF.Gen, F). It should also satisfy aggregation lossy soundness. Further, let CMT = (BGen, TGen, Com, TCom, TCol) be an (ε_b, ε_g, ε_t)-special commitment scheme for LF with key space K, randomness space G and commitment space H. We make use of random oracles H : {0, 1}* → K, H_a : {0, 1}* → S, and H_c : {0, 1}* → S. We give a verbal description of our scheme MS_a[LF, CMT]. Formally, the scheme is presented in Figure 9.
Setup and Key Generation. The public parameters of the scheme are par ← LF.Gen(1^λ), defining the linear function F = F(par, ·). To generate a key (algorithm Gen), a user samples sk := x ←$ D. The public key is pk := X := F(x).

Key Aggregation.
For N users with public keys P = {pk_1, . . . , pk_N}, the aggregated public key p̄k is computed (by algorithm Agg) as p̄k := Σ_{i∈[N]} a_i · pk_i, where a_i := H_a(⟨P⟩, pk_i).

Signing Protocol. Suppose N users with public keys P = {pk_1, . . . , pk_N} want to sign a message m ∈ {0, 1}*. We describe the signing protocol (algorithms Sig_0, Sig_1, Sig_2) from the perspective of the first user, which holds a secret key sk_1 = x_1 for public key pk_1 = X_1.
1. Commitment Phase. The user derives the aggregated public key p̄k as described above. Then, it derives a commitment key ck := H(p̄k, m) depending on the message. The user samples an element r_1 ←$ D and sets R_1 := F(r_1). Next, it commits to R_1 by sampling ϕ_1 ←$ G and setting com_1 := Com(ck, R_1; ϕ_1). Finally, it sends pm_{1,1} := com_1 to all users.
2. Response Phase. Let M_1 = (pm_{1,1}, . . . , pm_{1,N}) be the list of messages output in the commitment phase. Here, message pm_{1,i} is sent by user i and has the form pm_{1,i} = com_i. With this notation, the user aggregates the commitments via com := ⊗_{i∈[N]} com_i. It computes the challenge c and coefficient a_1 via c := H_c(p̄k, com, m) and a_1 := H_a(⟨P⟩, pk_1). Then, it computes the response s_1 := c · a_1 · x_1 + r_1. Finally, the user sends pm_{2,1} := (s_1, ϕ_1) to all users.
3. Aggregation Phase. Let M_2 = (pm_{2,1}, . . . , pm_{2,N}) be the list of messages output in the response phase. Here, message pm_{2,i} is sent by user i and has the form pm_{2,i} = (s_i, ϕ_i). To compute the final signature, users aggregate the responses and commitment randomness as s := Σ_{i∈[N]} s_i and ϕ := ⊕_{i∈[N]} ϕ_i. They output the final signature σ := (com, s, ϕ).

Verification.
For verification (algorithm Ver), let P = {pk_1, . . . , pk_N} be a set of public keys, m ∈ {0, 1}* be a message, and σ = (com, s, ϕ) be a signature. To verify σ, we determine the aggregated public key p̄k = X̄ as above. We reconstruct the commitment key ck := H(p̄k, m) and the challenge c := H_c(p̄k, com, m). Then, we output 1 if and only if the following equation holds:

com = Com(ck, F(s) − c · X̄; ϕ).

Completeness easily follows from the homomorphic properties of CMT and F. For a similar calculation, we refer to the proof of Lemma 2.
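The completeness calculation can be traced with a toy instantiation. The sketch below is structure-only and makes loud simplifications: the group is tiny, F is the plain Schnorr map g^x rather than the lossy pair (g^x, h^x), and Com is a multiplicative Pedersen-style stand-in that is homomorphic but is NOT the dual-mode scheme of Definition 9; the hash wrapper Hs is our own naming. It only demonstrates that the aggregated triple (com, s, ϕ) satisfies the verification equation.

```python
import hashlib
import random

p, q, g = 11, 23, 4  # toy order-11 subgroup of Z_23^* (illustrative sizes only)

def F(x):
    return pow(g, x, q)

def Hs(tag, *a):  # hash to Z_p; stands in for H, H_a and H_c (illustrative)
    data = "|".join([tag, *map(str, a)]).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % p

def Com(ck, R, phi):  # toy homomorphic commitment, NOT dual-mode or binding
    return R * pow(ck, phi, q) % q

N = 3
xs = [random.randrange(1, p) for _ in range(N)]
P = [F(x) for x in xs]

# Key aggregation: apk = prod_i pk_i^{a_i} with a_i = H_a(<P>, pk_i).
a = [Hs("a", P, pk) for pk in P]
apk = 1
for ai, pk in zip(a, P):
    apk = apk * pow(pk, ai, q) % q

m = "msg"
ck = pow(g, Hs("ck", apk, m), q)  # message-dependent commitment key

# 1. Commitment phase: each signer commits to R_i = F(r_i).
rs = [random.randrange(1, p) for _ in range(N)]
phis = [random.randrange(1, p) for _ in range(N)]
coms = [Com(ck, F(r), phi) for r, phi in zip(rs, phis)]

# 2. Response phase: aggregate commitments, derive c, respond.
com = 1
for ci in coms:
    com = com * ci % q
c = Hs("c", apk, com, m)
ss = [(c * ai * x + r) % p for ai, x, r in zip(a, xs, rs)]

# 3. Aggregation phase: sigma = (com, s, phi).
s, phi = sum(ss) % p, sum(phis) % p

# Verification: recompute R = F(s) * apk^{-c} and check the commitment opens.
R = F(s) * pow(apk, (p - c) % p, q) % q
assert com == Com(ck, R, phi)
```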
We postpone the proof to Supplementary Material Section A.

Our Tight Construction
In this section, we present a tightly secure two-round multi-signature scheme MS_t[LF, CMT] = (Setup, Gen, Sig, Ver). Let us first describe the building blocks that we need. We make use of a lossiness admitting linear function family LF = (LF.Gen, F). Also, let CMT = (BGen, TGen, Com, TCom, TCol) be an (ε_b, ε_g, ε_t)-special commitment scheme for LF with key space K, randomness space G and commitment space H. We make use of random oracles H, H_b, and H_c : {0, 1}* → S. We give a verbal description of the scheme. Formally, the scheme is presented in Figure 10.

Signing Protocol. Suppose N users with public keys P = {pk_1, . . . , pk_N} want to sign a message m ∈ {0, 1}*. We describe the signing protocol (algorithms Sig_0, Sig_1, Sig_2) from the perspective of the first user, which holds a secret key sk_1 = (x_{1,0}, x_{1,1}, seed_1) for public key pk_1 = (X_{1,0}, X_{1,1}). Observe that the bit b_1 determines the link between the responses, challenges, and public keys. Finally, the user sends pm_{2,1} := (s_{1,0}, s_{1,1}, ϕ_{1,0}, ϕ_{1,1}) to all users.
Then, we output 1 if and only if the following two equations hold. The proof is an easy calculation and is given in Supplementary Material Section B.

Game G_0: We define G_0 to be exactly as MS-EUF-CMA^A_MS, with the following modification: the adversary A does not get access to oracle Sig_2. Note that in MS, algorithm Sig_2 does not make any use of the secret key or a secret state and can be publicly run using the messages output by Sig_0 and Sig_1. Therefore, for any adversary in the original game, there is an adversary in game G_0 that simulates oracle Sig_2 and has the same advantage.

T(B) ≈ T(A), T(B′) ≈ T(A), and
Before we proceed, let us describe game G_0 in more detail to fix some notation. In the beginning, the game samples parameters par ← LF.Gen(1^λ). It also samples a public key pk* = (X_{1,0}, X_{1,1}) = (F(x_{1,0}), F(x_{1,1})) for a secret key sk* = (x_{1,0}, x_{1,1}, seed_1). Then, it runs A on input par, pk* with access to the following oracles:
• Signing oracles Sig_0, Sig_1: The oracles simulate algorithms Sig_0 and Sig_1 on secret key sk*, respectively. Here, A can submit a query Sig_0(P, m) to start a new interaction in which message m is signed for public keys P = {pk_1, . . . , pk_N}. We assume that pk* = pk_1, and the oracle adds (P, m) to a list L.
• Random oracles H, H_b, H_c: The random oracles H, H_c are simulated honestly via lazy sampling. To this end, the game holds maps h, h_c that map the inputs of the respective random oracles to their outputs. Random oracle H_b, however, is simulated by forwarding the query to an internal oracle Ĥ_b with the same interface. This oracle holds a similar map ĥ_b, is kept internally by the game, and is not provided to the adversary. Looking ahead, this indirection allows us to distinguish the queries to H_b that some of the following games issue from the queries that the adversary issues.
In the end, A outputs a forgery (P*, m*, σ*). The game outputs 1 if and only if pk* ∈ P*, (P*, m*) ∉ L, and Ver(P*, m*, σ*) = 1. Without loss of generality, we assume that the public key pk* is equal to pk_1 for P* = {pk_1, . . . , pk_N}. To fix notation, write σ* = (σ*_0, σ*_1, B*).

Game G_1: In game G_1, we add an abort. Namely, the game sets bad := 1 and aborts if the adversary makes a random oracle query H_b(seed_1, ·). Note that this does not include the queries that are made by the game itself, as these are done using oracle Ĥ_b directly. As the only information about seed_1 that A gets are the values of H_b(seed_1, ·), and seed_1 is sampled uniformly at random from {0, 1}^λ, we can upper bound the probability of bad by Q_{H_b}/2^λ. Therefore, we have |Adv_0 − Adv_1| ≤ Q_{H_b}/2^λ.

Game G_2: In game G_2, we restrict the winning condition. Namely, the game outputs 0 if the forgery (P*, m*, σ*) output by A satisfies 1 − b*_1 ≠ Ĥ_b(seed_1, P*, m*). Recall that b*_1 is the bit related to pk_1 = pk* that is included in the signature σ*. Assuming G_1 outputs 1, we know that (P*, m*) ∉ L. Therefore, A can only get information about the bit Ĥ_b(seed_1, P*, m*) if it queries the wrapper random oracle H_b at this position. However, in this case G_1 would set bad := 1 and abort. Thus, the view of A is independent of the bit Ĥ_b(seed_1, P*, m*). We obtain Adv_2 ≥ Adv_1/2.

Game G_3: In game G_3, the game aborts if (par, x_{1,1}) ∉ Good, where Good is as in the definition of a special commitment scheme. It is clear that |Adv_2 − Adv_3| ≤ ε_g.

Game G_4: In game G_4, we change the behavior of random oracle H. Recall that before, to answer a query H(b, P, m) for which the hash value has not been defined, a key ck ←$ K was sampled and returned. In this game, the oracle instead distinguishes two cases. In the first case, if b = 1 − Ĥ_b(seed_1, P, m), the game samples (ck, td) ← TGen(par, X_{1,1}). It also stores tr[P, m] := td, where tr is a map.
In the second case, if b = Ĥ_b(seed_1, P, m), it samples ck ← BGen(par). In both cases, ck is returned as before. To see that G_3 and G_4 are indistinguishable, we first note that for the first case, the distribution of ck stays the same. This is because we can assume (par, x_{1,1}) ∈ Good due to the previous change. The keys returned in the second case are indistinguishable by the multi-key indistinguishability of CMT. More precisely, we give a reduction B against the multi-key indistinguishability of CMT that interpolates between G_3 and G_4. The reduction gets as input par, x_{1,1} and Q_H commitment keys ck_1, . . . , ck_{Q_H}. It simulates G_3 for A with par while embedding the commitment keys in random oracle responses for queries H(b, P, m) with b = Ĥ_b(seed_1, P, m). In the end, it outputs whatever the game outputs. We have |Adv_3 − Adv_4| ≤ Adv^{keydist}_{B,CMT}(λ).

Game G_5: In game G_5, we change the signing oracles Sig_0, Sig_1. Our goal is to eliminate the use of the secret key component x_{1,1}. Recall that in previous games, oracle Sig_0 derived a bit b_1 := H_b(seed_1, P, m) and sampled random r_{1,0}, r_{1,1} and ϕ_{1,0}, ϕ_{1,1}. Then, these were used to compute commitments com_{1,0}, com_{1,1}, which were then output together with b_1. Then, in oracle Sig_1, the values s_{1,0}, s_{1,1} were computed using the secret keys x_{1,b_1}, x_{1,1−b_1}, respectively.
We can easily argue indistinguishability by using the special trapdoor property of CMT Q_{S_0} many times and get |Adv_4 − Adv_5| ≤ Q_S ε_t.
Game G_6: Here, we do not abort if (par, x_{1,1}) ∉ Good anymore. That is, we revert the change introduced in G_3. It is clear that |Adv_5 − Adv_6| ≤ ε_g.

Game G_7: In game G_7, we change how the public key component X_{1,1} is computed. Recall that before, X_{1,1} is computed as X_{1,1} := F(x_{1,1}) for x_{1,1} ←$ D. Also, note that due to the previous changes, the value x_{1,1} is not used anymore. In G_7, we sample X_{1,1} ←$ R. A direct reduction B′ against the key indistinguishability of the lossiness admitting linear function family LF shows indistinguishability of G_6 and G_7. Concretely, B′ gets par and X_{1,1} as input, and simulates G_6 for A. In the end, it outputs whatever the game outputs. We have |Adv_6 − Adv_7| ≤ Adv^{keydist}_{B′,LF}(λ).

Game G 8 :
In G_8, we change how H_c is executed. Concretely, consider a query H_c(pk, com, m, P, B, b) with pk = pk* and b = Ĥ_b(seed_1, P, m). For these queries, the game now runs R ← Ext(H(b, P, m), com) and stores r[com, m, P, B] := R, where r is another map. Here, Ext is the (unbounded) extractor for the statistical binding property of CMT. The rest of the oracle does not change. Note that for b of this form, the value ck = H(b, P, m) is sampled as ck ← BGen(par) (cf. G_4). We also slightly change the winning condition of the game. Namely, in G_8, consider the forgery (P*, m*, σ*) with σ* = (σ*_0, σ*_1, B*), and let R*_0, R*_1 ∈ R be the values that are computed during the execution of Ver(P*, m*, σ*). The game returns 0 if R*_{1−b*_1} ≠ r[com*, m*, P*, B*].

We claim that indistinguishability of G_7 and G_8 can be argued using the statistical binding property of CMT. To see this, assume that G_7 outputs 1. Then, due to the change in G_2, we know that 1 − b*_1 = Ĥ_b(seed_1, P*, m*). Therefore, in the corresponding query H_c(pk_1, com*, m*, P*, B*, 1 − b*_1), algorithm Ext was run and the value r[com*, m*, P*, B*] was defined. If now R*_{1−b*_1} ≠ r[com*, m*, P*, B*], we have a contradiction to the statistical binding property of CMT. More precisely, we sketch an (unbounded) reduction from the statistical binding property. Namely, this reduction gets as input par and a commitment key ck*. Then, the reduction guesses i_H ←$ [Q_H] and i_{H_c} ←$ [Q_{H_c}]. It simulates game G_8 honestly, except for query i_H to random oracle H and query i_{H_c} to random oracle H_c. If it had to sample a ck ← BGen(par) in the former query, it instead responds with ck*. Similarly, if it had to run Ext in the latter query, it outputs com to the binding experiment. If these queries are the queries of interest for the forgery (i.e., query i_H was used to derive ck_{1−b*_1} and query i_{H_c} was used to derive c*), and R*_{1−b*_1} ≠ r[com*, m*, P*, B*], then the reduction outputs (R*_{1−b*_1}, ϕ*_{1−b*_1}). Otherwise, it outputs ⊥.
It is easy to see that if the reduction guesses the correct queries and the bad event separating G_7 and G_8 occurs, then it breaks the statistical binding property. As the view of A is as in G_8, and independent of (i_H, i_{H_c}), we obtain |Adv_7 − Adv_8| ≤ Q_H Q_{H_c} ε_b.

Finally, we use lossy soundness of LF to bound the probability that G_8 outputs 1. To do that, we give an unbounded reduction from the lossy soundness experiment, which is as follows.
• The reduction gets par, X_{1,1} as input. It samples î ←$ [Q_{H_c}]. Then, it simulates G_8 honestly until A outputs a forgery, except for query î to oracle H_c.
• Consider this query H_c(pk, com, m, P, B, b). The reduction aborts its execution if the hash value for this query is already defined, or if pk ≠ pk* or b ≠ Ĥ_b(seed_1, P, m). Otherwise, it runs R̂ ← Ext(H(b, P, m), com) as in G_8, outputs R̂ to the lossy soundness experiment, and obtains a challenge c in return, which it uses as the hash value.
• When A outputs the forgery (P*, m*, σ*), the reduction runs all the verification steps of G_8. Additionally, it checks if the value H_c(pk_1, com*, m*, P*, B*, 1 − b*_1) was defined during query î to H_c. If this is not the case, it aborts its execution. Otherwise, it returns s := s*_{1−b*_1} to the lossy soundness game.
It is clear that the view of A is independent of the index î until a potential abort of the reduction. Also, if the reduction does not abort its execution, it perfectly simulates game G_8 for A. Thus, it remains to show that if G_8 outputs 1, then the values output by the reduction satisfy F(s) − c · X_{1,1} = R̂. Once we have shown this, it follows that Adv_8 ≤ Q_{H_c} ε_l.
To show the desired property, assume that the reduction does not abort and that G_8 outputs 1. As the reduction guessed the right query and does not abort, the winning condition introduced in G_8 together with the verification equations for the forgery implies F(s) − c · X_{1,1} = R̂ for the values s, c, R̂ output by the reduction, as desired.

Instantiation
In this section, we show how to instantiate the building blocks that are needed for our constructions in the previous section. Concretely, we give a linear function family and a commitment scheme based on the DDH assumption. Then, we also discuss the efficiency of the resulting multi-signature schemes.

Linear Function Family
We make use of the well-known [KMP16] linear function family LF_DDH = (Gen, F) based on the DDH assumption. Precisely, let GGen be an algorithm that on input 1^λ outputs the description of a group G of prime order p with generator g. Then, Gen runs GGen and outputs par := (g, h) ∈ G^2 for h ←$ G. (We omit the description of G from par to make the presentation concise.) Then, the set of scalars is S := Z_p, the domain is D := Z_p, the range is R := G^2, and the function is F(par, x) := (g^x, h^x). It is easily verified that this constitutes a linear function family.
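A minimal sketch of LF_DDH in Python over a toy group (the order-11 subgroup of Z_23^*; the group choice and parameter sizes are illustrative only, not secure), checking the linearity of F:

```python
# Sketch of the DDH-based linear function family LF_DDH = (Gen, F).
# Toy group: the order-11 subgroup of Z_23^* generated by g = 2.
import random

P, Q, G = 23, 11, 2  # modulus, prime group order, generator (toy sizes)

def gen(rng=random):
    """Gen: output par = (g, h) for a random h in the group."""
    h = pow(G, rng.randrange(1, Q), P)
    return (G, h)

def F(par, x):
    """F(par, x) = (g^x, h^x): a linear function D = Z_q -> R = G x G."""
    g, h = par
    return (pow(g, x, P), pow(h, x, P))

def add(a, b):
    """Group operation on R = G x G (componentwise product)."""
    return (a[0] * b[0] % P, a[1] * b[1] % P)

def scale(c, a):
    """Scalar action c * (A1, A2) = (A1^c, A2^c)."""
    return (pow(a[0], c, P), pow(a[1], c, P))

# Linearity check: F(x + y) = F(x) + F(y) and F(c * x) = c * F(x).
par = gen()
x, y, c = 3, 7, 5
assert F(par, (x + y) % Q) == add(F(par, x), F(par, y))
assert F(par, (c * x) % Q) == scale(c, F(par, x))
```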

Lemma 3.
Assuming that the DDH assumption holds relative to GGen, the linear function family LF DDH is ε l -lossiness admitting, with ε l ≤ 3/p.

Concretely, for any PPT algorithm A there is a PPT algorithm B with T(B) ≈ T(A) and
Adv keydist A,LF DDH (λ) ≤ Adv DDH B,GGen (λ).

Proof.
First, note that the definition of key indistinguishability matches exactly the DDH assumption relative to GGen. Next, we argue that lossy soundness holds. We have to bound the probability that A outputs a valid response s with F(s) − c · (X_1, X_2) = (R_1, R_2). The probability that h = g^0 is at most 1/p. Thus, we assume that h is a generator of G. Write X_1 = g^{x_1} and X_2 = h^{x_2}. With probability at most 1/p we have x_1 = x_2. Assume that x_1 ≠ x_2. We claim that under these assumptions, the probability that we have to bound is at most 1/p. To see this, assume that there is some (R_1, R_2) such that there exist two different challenges c ≠ c′ in Z_p and responses s, s′ ∈ Z_p with F(s) − c · (X_1, X_2) = (R_1, R_2) = F(s′) − c′ · (X_1, X_2). Then, we can combine both equations and rearrange terms to get x_1 · (c − c′) = s − s′ = x_2 · (c − c′), and hence x_1 = x_2, contradicting our assumption that x_1 ≠ x_2. The claim follows.
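The counting step above (a lossy key (X_1, X_2) with x_1 ≠ x_2 admits at most one valid challenge per pair (R_1, R_2)) can be verified exhaustively in a toy group. A sketch with illustrative parameters of our choosing:

```python
# Exhaustive check of the lossy-soundness counting argument in the
# order-11 subgroup of Z_23^* (toy parameters): if X1 = g^x1, X2 = h^x2
# with x1 != x2, then for every (R1, R2) at most one challenge c admits
# a response s with g^s = R1 * X1^c and h^s = R2 * X2^c.
P, Q = 23, 11
g = 2                      # generator of the order-11 subgroup
h = pow(g, 4, P)           # h = g^4, another generator

elems = [pow(g, i, P) for i in range(Q)]
x1, x2 = 3, 8              # x1 != x2: (X1, X2) is a "lossy" key
X1, X2 = pow(g, x1, P), pow(h, x2, P)

for R1 in elems:
    for R2 in elems:
        good = [c for c in range(Q)
                if any(pow(g, s, P) == R1 * pow(X1, c, P) % P and
                       pow(h, s, P) == R2 * pow(X2, c, P) % P
                       for s in range(Q))]
        assert len(good) <= 1, (R1, R2, good)
print("at most one valid challenge per (R1, R2), as claimed")
```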

Proof. Let A be any unbounded algorithm. We have to bound the probability that A outputs a valid response in the following experiment. First, (g, h) ← Gen(1^λ) and (X_1, X_2) ←$ G^2 are sampled, and g, h, X_1, X_2 are given to A. Then, A outputs pairs of group elements and exponents ((X_{2,1}, X_{2,2}), a_2), . . . , ((X_{N,1}, X_{N,2}), a_N). Next, an exponent a_1 ←$ Z_p is sampled, and X̃_1, X̃_2 are defined as X̃_1 := X_1^{a_1} · ∏_{i=2}^N X_{i,1}^{a_i} and X̃_2 := X_2^{a_1} · ∏_{i=2}^N X_{i,2}^{a_i}. Then, A outputs (R_1, R_2) on input a_1. A challenge c ←$ Z_p is sampled and A outputs s on input c. The probability that h = g^0 is at most 1/p. Thus, we assume that h is a generator of G. Looking at the proof of Lemma 3, we see that it is sufficient to argue that with high probability, (X̃_1, X̃_2) is not of the form (g^x̃, h^x̃) for any x̃ ∈ Z_p. In other words, we have to show that with high probability, the pair (X̃_1, X̃_2) is not in the image of F. Conditioned on that, as in the proof of Lemma 3, the probability above can be bounded by 1/p.
To show this, we fix the exponents x_{i,j} ∈ Z_p such that X_{i,1} = g^{x_{i,1}} and X_{i,2} = h^{x_{i,2}}. The probability that x_{1,1} = x_{1,2} is at most 1/p. From now on, we condition on x_{1,1} ≠ x_{1,2}. The pair (X̃_1, X̃_2) is in the image of F if and only if a_1 · x_{1,1} + Σ_{i=2}^N a_i · x_{i,1} = a_1 · x_{1,2} + Σ_{i=2}^N a_i · x_{i,2}. This is equivalent to a_1 · (x_{1,1} − x_{1,2}) = Σ_{i=2}^N a_i · (x_{i,2} − x_{i,1}). As a_1 is sampled uniformly over Z_p after A chooses the x_{i,j} and the a_i for i ≥ 2, the above holds with probability at most 1/p, and the claim follows.
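The same exhaustive style checks the aggregation argument: once x_{1,1} ≠ x_{1,2}, at most one choice of the random coefficient a_1 places the aggregated key in the image of F. A sketch with illustrative toy parameters and arbitrary adversarial keys and coefficients of our choosing:

```python
# Exhaustive check of the aggregation step in the order-11 subgroup of
# Z_23^* (toy parameters): for a "lossy" challenge key (x_{1,1} != x_{1,2}),
# at most one coefficient a_1 makes (X~1, X~2) land in the image of F,
# so the bad event has probability at most 1/q over a uniform a_1.
P, Q = 23, 11
g, h = 2, pow(2, 4, P)

def in_image(Y1, Y2):
    """Is (Y1, Y2) of the form (g^x, h^x) for some x in Z_q?"""
    return any(pow(g, x, P) == Y1 and pow(h, x, P) == Y2 for x in range(Q))

# Challenge key with x_{1,1} != x_{1,2}, plus adversarial keys/coefficients.
X1, X2 = pow(g, 3, P), pow(h, 7, P)
others = [((pow(g, 5, P), pow(h, 5, P)), 2), ((pow(g, 9, P), pow(h, 1, P)), 6)]

bad = []
for a1 in range(Q):
    Y1, Y2 = pow(X1, a1, P), pow(X2, a1, P)
    for (Xi1, Xi2), ai in others:
        Y1 = Y1 * pow(Xi1, ai, P) % P
        Y2 = Y2 * pow(Xi2, ai, P) % P
    if in_image(Y1, Y2):
        bad.append(a1)
assert len(bad) <= 1, bad
```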

Commitment Scheme
We give a special trapdoor commitment scheme CMT_DDH = (BGen, TGen, Com, TCom, TCol) for the linear function family LF_DDH. For given parameters of LF_DDH, the commitment scheme has key space K := G^{3×3} and message space G × G (the range of F). It has randomness space G := Z_p^3 and commitment space H := G^3. Both are associated with the natural componentwise group operations. We describe the algorithms of the scheme verbally.

Concretely, for any PPT algorithm A, there is a PPT algorithm B with T(B) ≈ T(A) and
The homomorphism property is trivial to check. Next, we define the set Good as in the definition of a special commitment scheme. Namely, we define Good as the set of pairs ((g, h), x) with h ≠ 1_G and x ≠ 0. Clearly, for (g, h) ← LF.Gen(1^λ) and x ←$ Z_p, the probability that ((g, h), x) ∉ Good is at most 2/p. Therefore, ε_g ≤ 2/p. In the following, we also need the following observation: if ((g, h), x) ∈ Good, then the elements g, h, g^x, h^x are all generators of G. The rest of the proof of the theorem is given in separate lemmas.
Lemma 5. CMT_DDH satisfies the uniform keys property of an (ε_b, ε_g, ε_t)-special commitment scheme for LF_DDH.

Proof. Let (par, x) ∈ Good for par = (g, h). Let (X_1, X_2) = F(x) = (g^x, h^x). Consider the distribution of ck for (ck, td) ← TGen(par, (X_1, X_2)). The entries of ck are powers of g, X_1, and X_2 with uniformly random and independent exponents d_{i,j} ∈ Z_p (i, j ∈ [3]). As g, X_1, X_2 are generators, we see that ck is uniform over G^{3×3}, proving the claim.

Proof.
First, we make the assumption that in both distributions, the matrix D has full rank. The probability that this does not hold can easily be bounded by 3/p. Using that D has full rank and g, X_1, X_2 are generators of G, we see that in distribution T_1, (C_0, C_1, C_2) is uniform over G^3. Therefore, T_1 is identically distributed to the distribution that we get from sampling uniform exponents τ, ρ_1, ρ_2, setting (C_0, C_1, C_2) := (g^τ, X_1^{ρ_1} g^s, X_2^{ρ_2} h^s), and then finding the unique values (α, β, γ) that satisfy (C_0, C_1, C_2) = Com(A, (R_1, R_2); (α, β, γ)). We claim that this can be done using (α, β, γ)^t := D^{−1}(τ, ρ_1 + c, ρ_2 + c)^t, which is equivalent to distribution T_0.
To see this, note that (C_0, C_1, C_2) = Com(A, (R_1, R_2); (α, β, γ)) is equivalent to a linear equation system over the exponents. Using the way we generate (C_0, C_1, C_2), we see that the g^s and h^s terms cancel out, and the system is solved exactly by (α, β, γ)^t = D^{−1}(τ, ρ_1 + c, ρ_2 + c)^t. This concludes the proof.
To finish the proof, let A be any algorithm. We have to bound the probability that A breaks the statistical binding property in the corresponding experiment. Note that the probability that Ext outputs ⊥ in this experiment is 1/p, as A_{1,1} is uniform in G. We assume that Ext does not output ⊥, and want to show that the above probability conditioned on this event is zero. First, it is easy to see that we have Com(A, (R_1, R_2); (t, 0, 0)) = (C_0, C_1, C_2). Further, assume that A outputs (R′_1, R′_2) = (g^{r′_1}, g^{r′_2}) and ϕ′ = (α, β, γ) such that Com(A, (R′_1, R′_2); ϕ′) = (C_0, C_1, C_2). Using the definition of Com and BGen, we see that this implies that the vector (0, r_1 − r′_1, r_2 − r′_2)^t is in the span of a. As a_0 ≠ 0, this implies that it is the zero vector, showing that R_1 = R′_1 and R_2 = R′_2.

Lemma 8. For any PPT algorithm A, there is a PPT algorithm B with T(B) ≈ T(A) and
(c) Reduction B computes A_i := g^{D_i}, which should be understood componentwise.
With probability at least 1 − 1/p^3, the matrix H_0 ∈ Z_p^{3×1} has full rank. If H_0 has full rank, we see that (even for a fixed H of this form) the key A_i is distributed exactly as a commitment key in KEYDIST_{0,CMT_DDH}, which finishes the proof.

Efficiency
We briefly discuss the efficiency of our schemes. For both schemes, we can further reduce the communication complexity. Namely, instead of sampling the commitment randomness ϕ_i ∈ Z_p^3 directly, each signer i samples a short seed seed_i ←$ {0, 1}^λ and defines ϕ_i := H(seed_i), where H is a random oracle. Later, seed_i is sent as an opening instead of ϕ_i. Our security proofs still go through, using the entropy of seed_1 and by programming H(seed_1) after using the equivocation trapdoor. This reduces the per-signer communication complexity.
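A minimal sketch of this seed optimization, with SHAKE-256 standing in for the random oracle H and the reduction modulo p as our illustrative expansion strategy (the function name and constants are hypothetical):

```python
# Derive commitment randomness phi in Z_p^3 from a short seed, modeling
# phi_i = H(seed_i) for a random oracle H. We expand the seed with
# SHAKE-256 and reduce each 64-byte block mod p; the statistical bias
# is negligible since 512 bits far exceeds log2(p).
import hashlib, secrets

LAMBDA = 16            # seed length in bytes (128 bits)
P = 2**255 - 19        # illustrative prime; any group order p works

def expand_phi(seed: bytes, p: int):
    """Expand a lambda-bit seed into three uniform-looking Z_p elements."""
    out = hashlib.shake_256(seed).digest(3 * 64)
    return tuple(int.from_bytes(out[i*64:(i+1)*64], "big") % p
                 for i in range(3))

seed = secrets.token_bytes(LAMBDA)
phi = expand_phi(seed, P)
assert phi == expand_phi(seed, P)    # deterministic: the seed opens the commitment
assert all(0 <= x < P for x in phi)
```

Sending the 16-byte seed instead of three full scalars is exactly the communication saving described above.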

Concrete Parameters.
We estimate concrete sizes for keys, communication, and signatures for existing two-round multi-signatures and our schemes. The results are computed using Python scripts (cf. Supplementary Material Section D) and are presented in Table 2. Concretely, we assume 2^20 signing queries and 2^30 hash queries. We compute (1) the security level that is provided for the schemes assuming that the underlying assumption is 128 bit hard (see Table 2, Column "Security"), and (2) the sizes of groups, keys, signatures, and per-signer communication if we want to achieve 128 bit security for the scheme and instantiate the underlying group based on the security loss (see Table 2, other columns). For (1), the table shows that our schemes are the only ones providing meaningful security guarantees when instantiated with standardized groups. In addition, note that Musig2 comes at the cost of relying on a stronger one-more style assumption. For the setting in (2), our schemes have slightly worse concrete parameter sizes. However, we argue that in practice, the setting in (1) is much more important than (2), because schemes are mostly implemented using standardized groups. The approach in (2) should be avoided, since it leads to the use of groups that are not optimized for computation and not well-studied in terms of (concrete) security.

Table 2: Comparison of concrete parameters for existing two-round multi-signature schemes (top) in the random oracle model with our schemes (bottom). The column "Security" shows the security level provided for the schemes assuming that the underlying assumption is 128 bit hard. Other columns show sizes of keys, per-signer communication, and signatures in bytes, assuming the schemes are instantiated (using non-standard groups) to have 128 bit security based on the security loss.

In terms of computation, consider for example Musig2 compared to our scheme from Section 3.2. Musig2 uses one multi-exponentiation of size two for verification. In our scheme, signatures can be verified using one multi-exponentiation of size three and two multi-exponentiations of size five. Taking into account that Musig2 would have to use a 1209 bit group, while our scheme can use standardized groups for which multi-exponentiations are optimized, we expect that our scheme is computationally equally or more efficient.

A Omitted Proof from Section 3.2
Proof of Theorem 1. The proof can be understood as a simplified version of the proof of Theorem 2. Set MS := MS_a[LF, CMT], and let A be a PPT algorithm. In the following, we present a sequence of games G_0–G_8 proving the statement. The games are presented in Figures 4 and 5. We fix notation along the way.

Game G_0: Game G_0 is defined as G_0 := MS-EUF-CMA^A_MS. To fix notation, we recall this game. First, the game samples par ← LF.Gen(1^λ) and a pair (pk, sk) with sk := x_1 ←$ D and pk := X_1 := F(x_1). Then, A gets par, pk as input, and access to oracles Sig_0, Sig_1. We omit signing oracle Sig_2. As in the proof of Theorem 2, this does not change the advantage of A, as algorithm Sig_2 does not make any use of the secret key or a secret state and can be publicly run using the messages output by Sig_0 and Sig_1. Further, A gets access to random oracles H, H_a, H_c, simulated by the game in a lazy manner using maps h, h_a, h_c, respectively. Finally, A outputs a forgery (P*, m*, σ*). The game outputs 1 if and only if pk* ∈ P*, (P*, m*) ∉ L, and Ver(P*, m*, σ*) = 1. We assume that the public key pk* is equal to pk_1 for P* = {pk_1, . . . , pk_N}. We write σ* = (com*, s*, ϕ*), and denote the aggregated key for P* by p̃k := X̃ := Agg(P*). By definition, we have Adv_0 = Adv^{MS-EUF-CMA}_{A,MS}(λ).

Game G_1: In this game G_1, we add a bad event and let the game abort if it occurs. Concretely, consider par, x_1 sampled by the game as described above, and let Good be as in the definition of a special commitment scheme. The game aborts if (par, x_1) ∉ Good. By definition of the special commitment scheme, we have |Adv_0 − Adv_1| ≤ ε_g.

Game G_2: In this game G_2, we introduce a map b that maps inputs to random oracle H to bits. For each new input (p̃k, m) to H, the bit b[p̃k, m] is sampled from a Bernoulli distribution with parameter γ := 1/(Q_S + 1).
Further, the game aborts if any of the following occurs:
• For a signing query Sig_0(P, m) and p̃k := Agg(P), it holds that b[p̃k, m] = 1, or
• for the forgery (P*, m*, σ*) and p̃k := Agg(P*), it holds that b[p̃k, m*] = 0.
Note that the view of A is independent of the map b until an abort occurs. If the game does not abort, it is exactly like G_1. Therefore, we can use the fact (1 − 1/z)^z ≥ 1/4 for all z ≥ 2 and get Adv_2 ≥ γ(1 − γ)^{Q_S} · Adv_1 ≥ Adv_1/(4(Q_S + 1)).

Game G_3: In game G_3, we change how random oracle H is executed. Consider a query H(pk, m) for which the hash value is not yet defined. Recall that in this case, a bit b[pk, m] is sampled. Then, a commitment key ck has to be returned. In previous games, ck was sampled uniformly via ck ←$ K. Now, depending on this bit, we change how ck is computed. Namely, if b[pk, m] = 0, we sample (ck, td) ← TGen(par, X_1) and store the trapdoor td in another map tr[pk, m] := td. On the other hand, if b[pk, m] = 1, we sample ck ← BGen(par).
We argue that games G 2 and G 3 are indistinguishable as follows. First, note that for case b[pk, m] = 0, the distribution of ck stays the same, because we can assume (par, x 1 ) ∈ Good due to previous changes.
For the case b[pk, m] = 1, we use a reduction B against the multi-key indistinguishability of CMT interpolating between G_2 and G_3. Precisely, B gets as input par, x_1 and Q_H commitment keys ck_1, . . . , ck_{Q_H}. It simulates G_2 for A with par while embedding the commitment keys in random oracle responses for queries H(pk, m) with b[pk, m] = 1. In the end, it outputs whatever the game outputs. Clearly, we have |Adv_2 − Adv_3| ≤ Adv^{keydist}_{B,CMT}(λ).

Game G_4: Game G_4 is as G_3, but we change the execution of oracles Sig_0, Sig_1. Concretely, after this change, the secret key x_1 is no longer needed. Consider a query Sig_0(P, m). Recall that in previous games, in such a query, a commitment key ck := H(p̃k, m) is computed. Then, values r_1, ϕ_1 are sampled, and R_1 := F(r_1) and a commitment com_1 := Com(ck, R_1; ϕ_1) are computed. Later, in Sig_1, s_1 is computed as s_1 := c · a_1 · x_1 + r_1, where c and a_1 are output by H_c and H_a as in the scheme. Assuming that the game does not abort in this query, we can assume that b[p̃k, m] = 0, due to the change in G_2. This means that the entry td := tr[p̃k, m] is defined and was sampled together with ck using TGen(par, X_1). We use this in game G_4 as follows: The game no longer samples r_1 and ϕ_1. Instead, the commitment com_1 is computed via (com_1, St) ← TCom(ck, td). Later, in Sig_1, s_1 and ϕ_1 are computed using (ϕ_1, R_1, s_1) ← TCol(St, c · a_1). Applying the special trapdoor property of CMT Q_S many times, we obtain |Adv_3 − Adv_4| ≤ Q_S ε_t.
Game G_5: In game G_5, we revert the change we introduced in G_1. Concretely, the game no longer aborts if (par, x_1) ∉ Good. As before, we get |Adv_4 − Adv_5| ≤ ε_g.

Game G_6: In game G_6, we change how the public key X_1 is generated. Recall that it was generated as X_1 := F(x_1) for x_1 ←$ D. In this game, we sample X_1 ←$ R instead. Note that due to the change in G_4, we do not need x_1 anymore. We sketch a simple reduction B′ against the key indistinguishability of the lossiness admitting linear function family LF to show indistinguishability of G_5 and G_6. Namely, B′ gets par and X_1 as input, and simulates G_5 for A. In the end, it outputs whatever the game outputs. We have |Adv_5 − Adv_6| ≤ Adv^{keydist}_{B′,LF}(λ).

Game G_7: In game G_7, we want to use the binding property of CMT. To do that, we introduce two changes. First, in oracle queries of the form H_c(pk, com, m), we first set ck := H(pk, m). Then, if b[pk, m] = 0, we simulate H_c as before. If b[pk, m] = 1, we run the (unbounded) extraction algorithm Ext that exists according to the statistical binding property of CMT. Concretely, we run R ← Ext(ck, com) and store r[pk, com, m] := R, where r is another map. Then, we continue the simulation of H_c as before. Second, we change the winning condition of the game. Concretely, after A outputs the forgery (P*, m*, σ*), we parse σ* = (com*, s*, ϕ*) and compute the aggregated key p̃k := X̃ := Agg(P*) as before. In addition to the verification steps that we had before, we now also compute c* := H_c(p̃k, com*, m*) and R* := F(s*) − c* · X̃, and check if R* = r[p̃k, com*, m*]. If this does not hold, the game outputs 0.
Intuitively, these changes accomplish the following. The game extracts the values R from every commitment that is given by A via random oracle H c , for which the commitment key ck was generated using algorithm BGen (cf. game G 3 ). Then, we force the adversary into using the extracted value for its forgery.
Formally, we argue indistinguishability of G_6 and G_7 using an unbounded reduction to the statistical binding property of CMT. This reduction gets as input par and ck*. It guesses i_H ←$ [Q_H] and i_{H_c} ←$ [Q_{H_c}]. The reduction simulates game G_7 for A honestly, except for query i_H to random oracle H and query i_{H_c} to random oracle H_c. If it had to sample a ck ← BGen(par) in the former query, it instead responds with ck*. If it had to run Ext in the latter query, it outputs com to the experiment. If query i_H was used to derive the commitment key used in the forgery and query i_{H_c} was used to derive the challenge c* for the forgery, and R* ≠ r[p̃k, com*, m*], then the reduction outputs (R*, ϕ*). Otherwise, it outputs ⊥. Clearly, if the reduction guesses the correct queries and the bad event separating G_6 and G_7 occurs, then it breaks the statistical binding property. The view of A is as in G_7, and independent of (i_H, i_{H_c}). Therefore, we obtain |Adv_6 − Adv_7| ≤ Q_H Q_{H_c} ε_b.
Game G_8: In game G_8, we introduce another abort. Namely, the game aborts in a query H_a(P, pk) for which pk = pk_1 and the hash value is not yet defined, but for p̃k := Agg(P), there is some com, m such that H_c(p̃k, com, m) is already defined. The probability of this bad event is easily bounded. First, assume that pk_1 = X_1 is not the zero vector in R. The probability that X_1 is the zero vector is at most 1/|R|. Then, fix such a query H_a(P, pk = X_1) and any previous query to oracle H_c. The bad event can only occur if the input of the latter query starts with X̃, where a_1 X_1 = X̃ − Σ_{i=2}^N a_i X_i. As X_1 is not the zero vector, the value a_1 X_1 is uniform over the span of X_1. Further, the values on the right-hand side are fixed before a_1 is sampled, assuming that the bad event occurs. Thus, the probability of the bad event for this pair of queries is at most 1/|S|. We get |Adv_7 − Adv_8| ≤ 1/|R| + Q_{H_a} Q_{H_c}/|S|.
Note that this change ensured that for the forgery output by A, the query defining coefficient a 1 occurred before the query defining the challenge c * .
To bound the probability that G 8 outputs 1, we give an unbounded reduction from the aggregation lossy soundness of LF.
• The reduction gets as input parameters par and an element X_1. It samples î_{H_a} ←$ [Q_{H_a}] and î_{H_c} ←$ [Q_{H_c}], and simulates G_8 honestly, except for the queries described below.
• If the query î_{H_c} to oracle H_c occurs before the query î_{H_a} to oracle H_a, the reduction aborts its execution.
• Consider the query î_{H_a} to oracle H_a. If the hash value is already defined, the reduction aborts its execution. Else, let this query be H_a(P, pk). If pk ≠ pk_1, the reduction aborts. Otherwise, it first parses P = {pk_1, . . . , pk_N} and queries a_i := H_a(P, pk_i) for all 2 ≤ i ≤ N. Then it outputs the pairs (pk_2, a_2), . . . , (pk_N, a_N) to the aggregation lossy soundness experiment. It gets a_1 in return, sets h_a[P, pk] := a_1, and continues the simulation as in G_8.
• Consider the query î_{H_c} to oracle H_c. Let this query be H_c(pk, com, m). The reduction aborts its execution if the hash value for this query is already defined. Else, it queries ck := H(pk, m). If b[pk, m] = 0, it aborts its execution. Otherwise, it runs R ← Ext(ck, com) as in G_8. It outputs R to the aggregation lossy soundness experiment and obtains a value c in return. Then, it sets h_c[pk, com, m] := c and continues the simulation as in G_8.
• When A outputs the forgery (P*, m*, σ*), the reduction runs all the verification steps of G_8. Additionally, it checks if the value H_c(p̃k, com*, m*) was defined during query î_{H_c} to H_c, and the value H_a(P*, pk_1) was defined during query î_{H_a} to oracle H_a. If this is not the case, it aborts its execution. Otherwise, it returns s := s* to the aggregation lossy soundness experiment.
Clearly, unless the reduction aborts due to wrongly guessing the indices î_{H_a}, î_{H_c}, the view of A is exactly as in G_8. Before any such abort, A's view is independent of the indices î_{H_a}, î_{H_c}. Also, it is clear that if the reduction does not abort, it outputs a valid solution to the aggregation lossy soundness experiment. Therefore, we get Adv_8 ≤ Q_{H_a} Q_{H_c} ε_al, and the statement is proven.

B Omitted Proofs and Figures from Section 3.3
Proof of Lemma 2. Consider the variables given in verification and an honest execution of the protocol. Concretely, let P = {pk_1, . . . , pk_N} be a set of public keys, m ∈ {0, 1}* be a message, and σ = (σ_0, σ_1, B) be a signature computed by an honest execution of the signing protocol specified by algorithms Sig_0, Sig_1, Sig_2. Write B = b_1 . . . b_N, σ_0 = (com_0, ϕ_0, s_0), and σ_1 = (com_1, ϕ_1, s_1). Write the public keys pk_i as pk_i = (X_{i,0}, X_{i,1}). Then, the homomorphic properties of F, the homomorphic properties of Com, and the definition of ϕ_0 together imply that the first verification equation holds. The proof for the second equation is similar.

D Scripts for Parameter Computation
Listing 1: Python script to compute security levels of two-round multi-signatures for fixed group sizes. A discussion is given in Section 4.3.
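A minimal stand-in capturing the computation described in Section 4.3; the loss expressions are our reading of Table 1 (loss Q_S for our first scheme, constant loss for our tight scheme, and a generic square-root loss for Forking-Lemma-based schemes), not the exact formulas of the original scripts:

```python
# Sketch: effective security level of a scheme with a given security
# loss, for a fixed group size. If the adversary's advantage satisfies
# advantage <= loss * eps, the effective security is the assumption's
# hardness minus log2 of the loss.
import math

QS, QH = 2**20, 2**30        # signing / hash queries (as in Section 4.3)
ASSUMPTION_BITS = 128        # hardness of DDH in the chosen group

def security_bits(loss_log2, bits=ASSUMPTION_BITS):
    """Effective security level for a multiplicative loss 2^loss_log2."""
    return bits - loss_log2

# Our scheme 1: loss O(Q_S); our scheme 2: fully tight (constant loss).
ours_loose = security_bits(math.log2(QS))        # 128 - 20 = 108 bits
ours_tight = security_bits(0)                    # 128 bits

# Forking-Lemma-based schemes lose roughly eps' ~ eps^2 / Q_H, i.e.
# eps <= sqrt(Q_H * eps'): a hedged reading that halves the level.
forking = (ASSUMPTION_BITS - math.log2(QH)) / 2  # (128 - 30) / 2 = 49 bits

print(f"ours (loss Q_S): {ours_loose} bits")
print(f"ours (tight):    {ours_tight} bits")
print(f"forking-based:   {forking} bits")
```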