Arrays
Arrays for Concurrent Operations
In Solidity’s native arrays, concurrent appends (e.g., via push()) lead to write–write conflicts when multiple transactions attempt to add elements at the same time.
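For illustration, the minimal contract below is a sketch using only standard Solidity (it is not part of concurrentlib). It shows why such conflicts occur: every push() both reads and updates the array's shared length slot, so parallel calls to add() collide.

// SPDX-License-Identifier: GPL-3.0
pragma solidity >=0.8.0 <0.9.0;

// Sketch only: a native dynamic array. Two transactions calling add() in the
// same block both read and update the array's length slot, so a parallel
// executor sees a write–write conflict and must serialize or revert one of them.
contract NativeBoolArray {
    bool[] flags;

    function add() public {
        flags.push(true); // touches the shared length slot on every call
    }
}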
Arcology's concurrent arrays are lockless, thread-safe, and deterministic data structures designed to handle concurrent access. They allow multiple threads or processes to read or modify the elements of the array concurrently while preserving state consistency and data integrity.
Arcology’s Concurrent Arrays:
Buffer updates during execution
Apply changes only after conflict checks
This enables thread-safe array operations with deterministic results.
Supported Data Types
Currently, the containers support a number of basic data types:
Address
Bool
Bytes
Bytes32
Int
String
U256
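As a sketch, the other typed containers are expected to follow the same pattern as the Bool example in the next section; the U256 import path, constructor, and push() signature below simply mirror Bool.sol and should be read as assumptions rather than confirmed API.

// SPDX-License-Identifier: GPL-3.0
pragma solidity >=0.8.0 <0.9.0;

// Assumed import path, mirroring the Bool.sol path used in the example below.
import "@arcologynetwork/concurrentlib/lib/array/U256.sol";

contract ValueLog {
    // A concurrent array of uint256 values; the constructor and push() are
    // assumed to behave like the Bool container (conflict-free parallel appends).
    U256 values = new U256();

    function record(uint256 v) public {
        values.push(v);
    }
}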
Examples
The code example below demonstrates how this concurrent data structure can be used for parallel execution.
// SPDX-License-Identifier: GPL-3.0
pragma solidity >=0.8.0 <0.9.0;

import "@arcologynetwork/concurrentlib/lib/array/Bool.sol";

contract BoolArray {
    // The concurrent Bool array is used to store boolean values in a thread-safe and
    // deterministic manner.
    Bool boolContainer = new Bool();

    // Appends safely in parallel with no conflicts; this function is thread-safe.
    function add() public {
        boolContainer.push(true);
    }

    // This function is also thread-safe when called in parallel on its own, but it will
    // conflict with add() if the two are called in parallel together.
    function length() public view returns (uint256) {
        return boolContainer.fullLength();
    }
}
Parallel Push Only
In the diagram below, two users, Alice and Bob, each submit a transaction (TX 0 and TX 1) that calls the same function, push(), which pushes a Boolean value of true to a Boolean concurrent container.

These transactions are executed in parallel using Arcology's concurrency model.
Both push(true) operations are commutative, meaning the order of execution does not affect the final result. Whether Alice’s push comes before Bob’s or vice versa, the outcome is the same: [true, true].
During the conflict detection phase, the system verifies that there is no data overlap or ordering dependency between these operations.
Because no conflict is detected, both transactions proceed directly to the commit phase, where their changes are merged deterministically into the final state.
The result is a successfully updated boolContainer = [true, true].
Parallel Push with Getting the Length
Now let us consider the following case. Alice and Bob each submit a transaction calling push(), and Charlie submits a transaction that calls length(), which returns the full length of the container. All three transactions are processed in parallel.

Transactions calling push() are commutative, meaning their execution order does not affect the final state. Charlie’s transaction calls fullLength(), which is a read operation that returns the full length of the array. However, the value it returns depends on the order in which the transactions are processed.
If Charlie’s transaction executes before TX0 and TX1, it sees length 0.
If it executes between TX0 and TX1, it sees length 1.
If it executes after both, it sees length 2.
Since fullLength() produces different outcomes based on its position relative to the other transactions, it violates the commutativity rule, which requires that the final result remain the same regardless of execution order.
To preserve commutativity and ensure deterministic execution, the system must choose between two options:
Revert TX0 and TX1 and keep TX2.
Keep TX0 and TX1 and revert TX2.
The system will choose the latter, as it is cheaper and less disruptive.
TX0 and TX1 will succeed, and boolContainer = [true, true].
Note: the scheduler learns from this conflict pattern and uses that knowledge in future blocks, automatically placing transactions that call non-commutative functions into separate generations so the same conflict does not recur.