A classic motivating example is a secure auction with private bids. Its requirements include:

  • Privacy: no player should be able to learn the bids of any other player.
  • Correctness: the player with the highest bid wins.
  • Independence of inputs: each player’s input is independent of every other input. This prevents an adversary from always winning with the highest bid.
  • Fairness: a player shouldn’t be able to cancel a round just because he doesn’t win.
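As a concrete illustration, the ideal functionality for such an auction can be sketched as a trusted party that sees every bid privately and reveals only the winner. The function name and structure below are illustrative assumptions, not from any specific library:

```python
# Hypothetical sketch of the ideal sealed-bid auction functionality:
# a trusted party receives every bid privately and reveals only the
# winner and the winning amount -- nothing about the losing bids.

def ideal_auction(bids: dict[str, int]) -> tuple[str, int]:
    """bids maps each player's name to their private bid."""
    winner = max(bids, key=bids.get)   # correctness: highest bid wins
    return winner, bids[winner]

# Only the returned pair is revealed to the players.
result = ideal_auction({"alice": 30, "bob": 45, "carol": 20})
```

The goal of MPC is to realise this functionality without any such trusted party actually existing.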

The problem is that these properties need to be defined for each application (auction, election, etc.) independently. A better approach is to define the adversarial behaviour, the execution setting, and the security guarantees once, and analyse protocols against them.

Adversarial behaviour:

  • Semi-honest: doesn’t deviate from the protocol’s rules, but inspects the transcript to learn more than it should.
  • Malicious: breaks the rules and behaves arbitrarily.
  • Covert: may follow any strategy, but is averse to being caught, i.e. an adversary who attempts an attack gets caught with some probability, which can be tuned according to the application.

Corruption Strategy:

  • Static: the set of parties under the adversary’s control is fixed before the computation begins.
  • Adaptive: the adversary can gain control of additional parties during the computation, deciding whom to corrupt, and when, arbitrarily.
  • Proactive: the adversary gains control of parties during the computation, but may lose control of some as well. Honest parties may become corrupted, and corrupted parties may become honest again.

Protocol Setting: Modular composition was introduced in [RC99], which proved that a secure MPC protocol is essentially equivalent to a trusted party running the computation by itself; thus, several sub-protocols can be modularised and analysed individually.

  • Sequential: the computation by multiple parties runs in a standalone setting, i.e. no other computation is being run in parallel. Any messages exchanged between the parties are sent before the computation starts or after it has finished.
  • Concurrent: other protocol executions may run alongside; security in this setting is captured by Universal Composability.

World Model:

  • Ideal: a trusted third party (TTP) performs the computation, so the model remains secure even if the participating parties are semi-honest. The parties simply send their inputs to the TTP and receive the result of the computation.
  • Real: models the real-world behaviour of the computation, and is used to reason about the security of a protocol. If any party in the real model can do exactly the same harm as it could have done in the ideal world model, then the real model is considered an emulator of the ideal world model.
    • A simple example is computing the XOR z = x ⊕ y of two inputs, where Alice has x and Bob has y. Even in the ideal world, it’s impossible not to learn the other party’s input after receiving the output from the TTP (e.g. Alice computes y = x ⊕ z). Thus, a real-world protocol that exposes one party’s input to the other is still secure for this functionality.
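A quick sketch of this inherent leakage (the bit values are chosen arbitrarily for illustration):

```python
# Even the ideal functionality for XOR leaks the other party's input:
# from her own input x and the output z = x ^ y handed back by the TTP,
# Alice recovers Bob's input y exactly.

x = 1              # Alice's private bit
y = 0              # Bob's private bit
z = x ^ y          # output returned by the trusted party

recovered_y = x ^ z    # Alice derives y from values she legitimately holds
assert recovered_y == y
```

Since the ideal world itself leaks y, a real protocol revealing the same information is not considered insecure.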

Simulator model: A real-world execution is trivially distinguishable from the ideal world model, since in the ideal world there is no communication between the two parties. This is resolved by introducing a simulator that simulates the real-world communication: Bob has no knowledge of the TTP or Alice, and only interacts with the simulator, which plays the role Alice plays in the real world.

The simulator is treated as a PPT algorithm, and is exactly as powerful as the semi-honest adversary it simulates. A real-world protocol for a 2-party computation of a functionality is secure if, for each party, there exists a PPT simulator such that the simulated ideal-world execution is computationally indistinguishable from the real-world execution.

*(Figure: sim-2pc)*

Secure 2PC: a two-party protocol π for computing a PPT function f is secure if, for each party P_i, there exists a PPT simulator S_i such that the ideal-world execution and the real-world execution of π are computationally indistinguishable in the view of the i-th party.

View: defined as the tuple (x, r, t), where t is the transcript of the communication, x is the input of party P, and r is the randomness used by P, if any.
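The view of a party can be modelled as a simple record; the structure below is an illustrative assumption, with field names following the definition above:

```python
from dataclasses import dataclass

# Sketch of a party's view in a 2PC execution: its own input, its local
# randomness, and the transcript of messages it observed. Security says
# a PPT simulator, given only the party's input and output, can produce
# a tuple computationally indistinguishable from this one.

@dataclass
class View:
    x: bytes                 # the party's private input
    r: bytes                 # randomness used by the party, if any
    transcript: list[bytes]  # messages seen during the protocol
```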

Taking oblivious transfer (OT) as an example:

```mermaid
flowchart LR
    Alice -.-> Sim["Simulator"]
    Sim -.-> Alice
    Alice -. "(x0,x1)" .-> Sim
    Sim -- "(sk0,sk1)" --> F_OT
    Sim -- "(Enc(x0),Enc(x1))" --> F_OT
    F_OT -- "(x_b)" --> Bob
```
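The ideal OT functionality in the diagram can be sketched as follows; the names mirror the flowchart, and this is a toy model of the trusted functionality, not a secure implementation:

```python
# Toy model of the ideal 1-out-of-2 OT functionality F_OT from the
# diagram: the sender submits (x0, x1), the receiver submits a choice
# bit b, and F_OT hands the receiver x_b. The sender learns nothing
# about b, and the receiver learns nothing about the other message.

def f_ot(x0: bytes, x1: bytes, b: int) -> bytes:
    assert b in (0, 1)
    return x1 if b else x0
```

A simulator for a corrupt receiver only needs x_b from F_OT to reproduce the receiver’s real-world view, which is exactly what the diagram depicts.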

Applications

References