The MemoryStark proving process mainly consists of Trace Generation, Constraint Evaluation, and ZK Circuit Evaluation. The core goal of the whole process is to ensure that memory accesses are ordered first by address and then by timestamp, and that every value read is consistent with the value most recently written to the same address. Details are as follows:
Memory read/write activity is recorded as a list of MemoryOp values. Each MemoryOp represents one memory access and is converted into a trace row of length NUM_COLUMNS via:
fn into_row<F: Field>(self) -> [F; NUM_COLUMNS]
Trace generation then proceeds in three steps: build the rows in row-major order (sorting the operations by address and timestamp), fill in the first_change flags and the range_check column, and finally populate the remaining column-major data used by the lookup argument:
fn generate_trace_row_major(&self, mut memory_ops: Vec<MemoryOp>) -> Vec<[F; NUM_COLUMNS]>
pub fn generate_first_change_flags_and_rc<F: RichField>(trace_rows: &mut [[F; NUM_COLUMNS]])
fn generate_trace_col_major(trace_col_vecs: &mut [Vec<F>])
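For example, generate_first_change_flags_and_rc can fill in the first-change flags and the range_check column by comparing each row with the next one. The following is a minimal sketch only; the column index constants (ADDR_CONTEXT, ADDR_SEGMENT, ADDR_VIRTUAL, TIMESTAMP, CONTEXT_FIRST_CHANGE, SEGMENT_FIRST_CHANGE, VIRTUAL_FIRST_CHANGE, RANGE_CHECK) are assumed names, and the actual implementation may differ:
pub fn generate_first_change_flags_and_rc<F: RichField>(trace_rows: &mut [[F; NUM_COLUMNS]]) {
    let num_ops = trace_rows.len();
    for idx in 0..num_ops - 1 {
        // Rows are already sorted by (context, segment, virtual, timestamp).
        let row = trace_rows[idx];
        let next_row = trace_rows[idx + 1];

        let context_changed = row[ADDR_CONTEXT] != next_row[ADDR_CONTEXT];
        let segment_changed = row[ADDR_SEGMENT] != next_row[ADDR_SEGMENT];
        let virtual_changed = row[ADDR_VIRTUAL] != next_row[ADDR_VIRTUAL];

        // At most one first_change flag fires, in order of significance.
        let context_first_change = context_changed;
        let segment_first_change = segment_changed && !context_first_change;
        let virtual_first_change =
            virtual_changed && !context_first_change && !segment_first_change;

        let out = &mut trace_rows[idx];
        out[CONTEXT_FIRST_CHANGE] = F::from_bool(context_first_change);
        out[SEGMENT_FIRST_CHANGE] = F::from_bool(segment_first_change);
        out[VIRTUAL_FIRST_CHANGE] = F::from_bool(virtual_first_change);

        // range_check holds the difference that the lookup argument later
        // proves to be small, matching the transition constraint below.
        out[RANGE_CHECK] = if context_first_change {
            next_row[ADDR_CONTEXT] - row[ADDR_CONTEXT] - F::ONE
        } else if segment_first_change {
            next_row[ADDR_SEGMENT] - row[ADDR_SEGMENT] - F::ONE
        } else if virtual_first_change {
            next_row[ADDR_VIRTUAL] - row[ADDR_VIRTUAL] - F::ONE
        } else {
            next_row[TIMESTAMP] - row[TIMESTAMP]
        };
    }
}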
In the STARK constraints we must ensure that the filter column is boolean, that the first_change flags are consistent, that range_check is computed correctly from the address and timestamp differences, and that every read returns the previously written value. These checks are implemented in:
fn eval_packed_generic<FE, P, const D2: usize>
// Ensure filter is only 0 or 1
let filter = local_values[FILTER];
yield_constr.constraint(filter * (filter - P::ONES));
// Ensure each first_change flag is boolean (its not_* counterpart is 1 - flag)
yield_constr.constraint(context_first_change * not_context_first_change);
yield_constr.constraint(segment_first_change * not_segment_first_change);
yield_constr.constraint(virtual_first_change * not_virtual_first_change);
yield_constr.constraint(address_unchanged * not_address_unchanged);
// Ensure correct range_check calculation
let computed_range_check = context_first_change * (next_addr_context - addr_context - one)
    + segment_first_change * (next_addr_segment - addr_segment - one)
    + virtual_first_change * (next_addr_virtual - addr_virtual - one)
    + address_unchanged * (next_timestamp - timestamp);
yield_constr.constraint_transition(range_check - computed_range_check);
// Ensure read value matches previously written value
for i in 0..VALUE_LIMBS {
    yield_constr.constraint_transition(
        next_is_read * address_unchanged * (next_values_limbs[i] - value_limbs[i]),
    );
}
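As a concrete example of the range_check constraint: suppose two adjacent rows access the same context and segment, but virtual addresses 5 and 9. Then virtual_first_change = 1 and the prover must set
\[\text{range\_check} = \text{next\_addr\_virtual} - \text{addr\_virtual} - 1 = 9 - 5 - 1 = 3,\]
and the lookup argument described below proves that this value is small, which in turn proves that the rows really are sorted in increasing address order.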
Besides the packed (native-field) constraints, the same checks must also be expressed at the circuit level, so that the STARK proof can be verified recursively and remain compatible with zk-SNARKs.
fn eval_ext_circuit
// Ensure first_change variables are boolean
let context_first_change_bool =
    builder.mul_extension(context_first_change, not_context_first_change);
yield_constr.constraint(builder, context_first_change_bool);
// If segment_first_change is set, the context must not have changed
let segment_first_change_check =
    builder.mul_extension(segment_first_change, addr_context_diff);
yield_constr.constraint_transition(builder, segment_first_change_check);
// Ensure correct range_check value
let range_check_diff = builder.sub_extension(range_check, computed_range_check);
yield_constr.constraint_transition(builder, range_check_diff);
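The circuit-side computed_range_check referenced above is not shown in the snippet. A minimal sketch of how it could be assembled, assuming plonky2's CircuitBuilder extension arithmetic and the same target names as in the packed constraints:
// Sketch: mirror the packed computed_range_check inside the recursive circuit.
// All addr_*, timestamp, and *_first_change values are ExtensionTargets taken
// from the local/next rows, as in the packed constraints (names assumed).
let one = builder.one_extension();

let context_diff = builder.sub_extension(next_addr_context, addr_context);
let context_diff = builder.sub_extension(context_diff, one);
let context_term = builder.mul_extension(context_first_change, context_diff);

let segment_diff = builder.sub_extension(next_addr_segment, addr_segment);
let segment_diff = builder.sub_extension(segment_diff, one);
let segment_term = builder.mul_extension(segment_first_change, segment_diff);

let virtual_diff = builder.sub_extension(next_addr_virtual, addr_virtual);
let virtual_diff = builder.sub_extension(virtual_diff, one);
let virtual_term = builder.mul_extension(virtual_first_change, virtual_diff);

let timestamp_diff = builder.sub_extension(next_timestamp, timestamp);
let timestamp_term = builder.mul_extension(address_unchanged, timestamp_diff);

let computed_range_check = builder.add_extension(context_term, segment_term);
let computed_range_check = builder.add_extension(computed_range_check, virtual_term);
let computed_range_check = builder.add_extension(computed_range_check, timestamp_term);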
To prove efficiently that every range_check value is small, a lookup argument is used: the RANGE_CHECK column is looked up against the COUNTER table column, with the multiplicity of each value recorded in FREQUENCIES.
fn lookups(&self) -> Vec<Lookup<F>> {
    vec![Lookup {
        columns: vec![Column::single(RANGE_CHECK)],
        table_column: Column::single(COUNTER),
        frequencies_column: Column::single(FREQUENCIES),
        filter_columns: vec![None],
    }]
}
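On the prover side, the table and multiplicity columns consumed by this lookup could be filled in generate_trace_col_major along the following lines. This is a sketch only; it assumes COUNTER simply enumerates 0..num_rows, FREQUENCIES counts how often each counter value occurs in RANGE_CHECK, and every range_check value is smaller than the number of rows:
fn generate_trace_col_major<F: RichField>(trace_col_vecs: &mut [Vec<F>]) {
    let num_rows = trace_col_vecs[0].len();
    for i in 0..num_rows {
        // The table column enumerates 0, 1, ..., num_rows - 1.
        trace_col_vecs[COUNTER][i] = F::from_canonical_usize(i);
        // Tally how often each looked-up value appears in RANGE_CHECK.
        let rc = trace_col_vecs[RANGE_CHECK][i].to_canonical_u64() as usize;
        trace_col_vecs[FREQUENCIES][rc] = trace_col_vecs[FREQUENCIES][rc] + F::ONE;
    }
}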
LogicStark is the proof component for basic bitwise logical operations (AND, OR, XOR, NOR). Its workflow includes trace generation, constraint construction, STARK proof generation, and recursive circuit verification, ultimately yielding efficient correctness proofs for logical operations. The STARK proof process typically consists of the following stages:
In the STARK proof system, the trace is the central record of the computation: it captures the inputs, intermediate steps, and outputs, and is ultimately encoded as polynomials.
The code first defines the four supported logical operations:
#[derive(Copy, Clone, Debug, Eq, PartialEq)]
pub(crate) enum Op {
    And,
    Or,
    Xor,
    Nor,
}

impl Op {
    pub(crate) fn result(&self, a: u32, b: u32) -> u32 {
        match self {
            Op::And => a & b,
            Op::Or => a | b,
            Op::Xor => a ^ b,
            Op::Nor => !(a | b),
        }
    }
}
The Operation struct stores a logical operation:
#[derive(Debug, Clone)]
pub(crate) struct Operation {
    operator: Op,
    input0: u32,
    input1: u32,
    pub(crate) result: u32,
}

impl Operation {
    pub(crate) fn new(operator: Op, input0: u32, input1: u32) -> Self {
        let result = operator.result(input0, input1);
        Operation { operator, input0, input1, result }
    }
}
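For example, constructing an operation computes and stores its result immediately:
let op = Operation::new(Op::Xor, 0b1100, 0b1010);
assert_eq!(op.result, 0b0110);
assert_eq!(Operation::new(Op::Nor, 0, 0).result, u32::MAX);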
The trace is a 2D array where each row represents a logical operation:
fn generate_trace_rows(
    &self,
    operations: Vec<Operation>,
    min_rows: usize,
) -> Vec<[F; NUM_COLUMNS]> {
    let len = operations.len();
    // The trace length must be a power of two (and at least min_rows).
    let padded_len = len.max(min_rows).next_power_of_two();
    let mut rows = Vec::with_capacity(padded_len);
    for op in operations {
        rows.push(op.into_row());
    }
    // Pad to the next power of two with all-zero rows; every IS_* flag is
    // zero there, so the padding rows satisfy the constraints trivially.
    for _ in len..padded_len {
        rows.push([F::ZERO; NUM_COLUMNS]);
    }
    rows
}
Constraints are the core of STARK proofs, ensuring that logical operations follow the correct mathematical rules throughout computation.
pub const IS_AND: usize = 0;
pub const IS_OR: usize = IS_AND + 1;
pub const IS_XOR: usize = IS_OR + 1;
pub const IS_NOR: usize = IS_XOR + 1;
pub const INPUT0: Range<usize> = (IS_NOR + 1)..(IS_NOR + 1) + VAL_BITS;
pub const INPUT1: Range<usize> = INPUT0.end..INPUT0.end + VAL_BITS;
pub const RESULT: Range<usize> = INPUT1.end..INPUT1.end + PACKED_LEN;
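Given this column layout, the into_row conversion used during trace generation (not shown in the source) might look like the following sketch. It assumes VAL_BITS = 32 and PACKED_LEN = 1, i.e. 32-bit inputs committed bit by bit and the result stored as a single packed 32-bit limb; these constants and the exact layout are assumptions, not taken from the source:
impl Operation {
    // Hypothetical sketch: convert one Operation into a trace row.
    fn into_row<F: Field>(&self) -> [F; NUM_COLUMNS] {
        let mut row = [F::ZERO; NUM_COLUMNS];
        // Exactly one operation flag is set per row.
        let flag_col = match self.operator {
            Op::And => IS_AND,
            Op::Or => IS_OR,
            Op::Xor => IS_XOR,
            Op::Nor => IS_NOR,
        };
        row[flag_col] = F::ONE;
        // Inputs are committed bit by bit (little-endian), so the constraints
        // can form the bitwise AND as a weighted sum of bit products.
        for (i, col) in INPUT0.enumerate() {
            row[col] = F::from_canonical_u32((self.input0 >> i) & 1);
        }
        for (i, col) in INPUT1.enumerate() {
            row[col] = F::from_canonical_u32((self.input1 >> i) & 1);
        }
        // The result is stored packed; with PACKED_LEN = 1 it is a single limb.
        row[RESULT.start] = F::from_canonical_u32(self.result);
        row
    }
}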
During STARK evaluation, each logical operation must satisfy:
\[\text{result} = \text{sum\_coeff} \cdot (\text{input0} + \text{input1}) + \text{and\_coeff} \cdot (\text{input0} \& \text{input1}) + \text{not\_coeff} \cdot \text{MASK}\]
Here the coefficients are selected by the operation flags IS_AND, IS_OR, IS_XOR, and IS_NOR: AND gives (sum_coeff, and_coeff, not_coeff) = (0, 1, 0); OR gives (1, -1, 0); XOR gives (1, -2, 0); and NOR gives (-1, 1, 1). MASK is the all-ones value of a limb (u32::MAX for 32-bit limbs).
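The NOR coefficients follow from rewriting NOR over a limb with all-ones mask MASK, using the per-bit identity OR = a + b - ab:
\[\text{NOR}(x, y) = \text{MASK} - (x \mathbin{|} y) = \text{MASK} - (x + y) + (x \mathbin{\&} y),\]
which is exactly sum_coeff = -1, and_coeff = 1, not_coeff = 1 when IS_NOR = 1.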
Constraint implementation:
fn eval_packed_generic<FE, P, const D2: usize>(
    &self,
    vars: &Self::EvaluationFrame<FE, P, D2>,
    yield_constr: &mut ConstraintConsumer<P>,
) {
    let lv = vars.get_local_values();

    // Operation-dependent coefficients for
    // result = sum_coeff*(x + y) + and_coeff*(x AND y) + not_coeff*MASK.
    let sum_coeff = lv[columns::IS_OR] + lv[columns::IS_XOR] - lv[columns::IS_NOR];
    let and_coeff =
        lv[columns::IS_AND] - lv[columns::IS_OR] - lv[columns::IS_XOR] * FE::TWO + lv[columns::IS_NOR];
    let not_coeff = lv[columns::IS_NOR];

    for (result_col, x_bits_cols, y_bits_cols) in izip!(
        columns::RESULT,
        columns::limb_bit_cols_for_input(columns::INPUT0),
        columns::limb_bit_cols_for_input(columns::INPUT1),
    ) {
        // Reconstruct the limb values from their little-endian bit columns.
        let x: P = limb_from_bits_le(x_bits_cols.clone().map(|col| lv[col]));
        let y: P = limb_from_bits_le(y_bits_cols.clone().map(|col| lv[col]));
        // The bitwise AND must be expressed algebraically: since the inputs are
        // committed bit by bit, x AND y is the weighted sum of the bit products.
        let x_and_y: P = izip!(0.., x_bits_cols, y_bits_cols)
            .map(|(i, x_col, y_col)| lv[x_col] * lv[y_col] * FE::from_canonical_u64(1 << i))
            .sum();
        let x_op_y = sum_coeff * (x + y)
            + and_coeff * x_and_y
            + not_coeff * FE::from_canonical_u32(u32::MAX);
        yield_constr.constraint(lv[result_col] - x_op_y);
    }
}
STARK proof generation itself follows the same flow described in the earlier documentation in this series.
Recursive circuits are used to efficiently verify STARK proofs:
fn eval_ext_circuit(
    &self,
    builder: &mut CircuitBuilder<F, D>,
    vars: &Self::EvaluationFrameTarget,
    yield_constr: &mut RecursiveConstraintConsumer<F, D>,
)
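As an illustration, the per-limb result constraint from eval_packed_generic could be mirrored in the circuit roughly as follows. This is a sketch assuming plonky2's CircuitBuilder extension-arithmetic API; x, y, x_and_y, and the coefficient targets are assumed to have been reconstructed from the local values exactly as in the packed version:
// Circuit version of: result - (sum_coeff*(x + y) + and_coeff*(x AND y) + not_coeff*MASK) = 0.
let mask = builder.constant_extension(F::Extension::from_canonical_u32(u32::MAX));

let x_plus_y = builder.add_extension(x, y);
let sum_term = builder.mul_extension(sum_coeff, x_plus_y);
let and_term = builder.mul_extension(and_coeff, x_and_y);
let not_term = builder.mul_extension(not_coeff, mask);

let x_op_y = builder.add_extension(sum_term, and_term);
let x_op_y = builder.add_extension(x_op_y, not_term);

let diff = builder.sub_extension(lv[result_col], x_op_y);
yield_constr.constraint(builder, diff);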