Each month we publish a summary of all the news, announcements, launches, updates and more from EverX.
Our July update is packed with information as we touch base with various teams within EverX. Over the past couple of weeks, members of the Node, Toolchain, Flex and DriveChain teams all stood up to give updates.
The Toolchain team has been busy handling security matters efficiently, without potential oversights, and managing the various software tools that work together to optimize programming processes. They’ve recently finalized the EVM upgrade of the C++ compiler while actively researching EVM in a controlled environment, with productive results. The team has also streamlined the debug interface, making call functions and debugging more convenient, and has fixed high-profile bugs submitted by users along the way.
SE (Startup Edition)
The SE team recently refactored the structure of the Evernode SE code to ease future improvements and maintenance.
- They’ve fully removed the PoA consensus;
- Simplified the block-producing timeline;
- Combined what were previously three separate Rust projects into a single project (crate).
The team plans to improve Evernode SE components this year and will be adding several features along the way that are useful for contract developers.
Wen Flex, right?
Well, ladies and germs, have we got updates for you!
The Flex team has been quiet lately, and for good reason. There are numerous undertakings in progress, from the tiniest details of the UI to anticipating, planning for and resolving potential bottlenecks.
Recovery of gas balances
Gas balances have become a discussion point, as a few beta testers have reported confusion about this feature. The team has taken this to heart and will be implementing clearer instructions.
In the image of the user interface above, we can see a balance available on the main gas wallet, which is used to auto-replenish all token wallets. All wallets (EVER, TBTC, TETH, TSDC) have native balances. Users will not only deposit tokens in native EVERs, but will also be able to withdraw some or all of their tokens via the user interface, along with their gas balances. Balances can also be recovered from a dedicated DeBot of the Flex client; this functionality is not yet released and is still under development.
The team is also working on bringing the user interface up to par with modern standards, taking inspiration from Binance and other top exchanges. They’ve also added a ‘total’ field showing the total amount of the current deal. For example, take the BTC/EVER pair: buying BTC at a price of 140k EVERs, we can use the slider to quickly choose to spend 50% of our EVERs and buy that much BTC at that price; if we choose 75% instead, the total changes accordingly. The user will, of course, also be able to manually enter exactly how much they want to buy.
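The arithmetic behind the slider and ‘total’ field can be sketched in a few lines. This is a hypothetical illustration of the calculation described above, not the actual Flex UI code; all names and numbers are assumptions.

```python
# Hypothetical sketch of the Flex order-form arithmetic described above.
# Function and variable names are illustrative, not the real Flex code.

def order_totals(balance_ever: float, price_ever_per_btc: float, slider_pct: float):
    """Given a token balance, a quoted price and a slider percentage,
    return how many EVERs are spent and how much BTC is bought."""
    spend_ever = balance_ever * slider_pct / 100.0
    amount_btc = spend_ever / price_ever_per_btc
    return spend_ever, amount_btc

# Buying BTC at 140,000 EVER per BTC with 50% of a 280,000 EVER balance:
spend, btc = order_totals(280_000, 140_000, 50)
# spend == 140000.0 EVER, btc == 1.0 BTC
```

Moving the slider to 75% would simply scale `spend_ever`, and the ‘total’ field updates from the same formula.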
Stress testing of price contracts
The Flex team has determined a current limit of 244 orders per price contract. It’s an interesting statistic; the team will decide how to raise or otherwise handle the limit at a later stage, with changes at either the contract or DeBot level. First, the team needs to fully understand what trades-per-second value the current infrastructure can provide.
Further updates from the team in brief:
- Testing limit orders functionality that replaces the current spot orders;
- Working on the Desktop Electron version for Mac, Linux and Windows to provide the level of security expected of a DApp such as a DEX;
- Testing Flex management contracts (approval of releases and distribution of fees);
- Discussing the further Flex roadmap, with near-term plans to release 1–2–3 to public beta, on Devnet first and then on Mainnet (dates to be confirmed).
The Node team’s recent development work has focused on integrating all the changes made in prior months, such as new functionality and critical bug fixes, that had not yet been deployed on mainnet. All related issues were investigated and fixed within a few days.
The Node team is set to begin the complicated process of releasing the VM and transaction executor to the public repository so that the SDK team can release a certain feature.
The Node team held an informative presentation on the DriveChain simulator developed over the past couple of months. DriveChain is a decentralized storage system that the team plans to implement on top of our blockchain in the future. The main idea is to store large files on DriveChain, using validators to store the data. The plan is to provide basic properties like privacy, integrity and public verifiability, meaning that after a validator saves a file, the other validators in the network can verify at any point in time, until the file expires, that the file is stored correctly. Validators storing the file must provide proof to the network that they have stored it correctly.
Let’s say our client has a large file, but the client doesn’t have resources to store it, so they want to save the file on DriveChain and expect the basic operations (read, write and delete).
When the client needs to perform a write operation, a file index smart contract is deployed on DriveChain, and the client provides the hash of the file, its size and a public key (which will later be used to prove the client’s identity).
This file index smart contract stores all the initial data about the file and is assigned to a shard, as seen in the diagram above. After the smart contract is deployed and assigned to a shard, a file connector object is prepared and saved; it contains data about the shard. The collator’s address is provided to the client, the client sends the file directly to the collator, and the other validators in the shard are obliged to download the file from the collator.
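The data the file index contract holds, as described above, can be sketched as a small data model. This is a hedged illustration; the field names and types are assumptions, not the actual contract layout.

```python
from dataclasses import dataclass

# Hypothetical sketch of the data the file index smart contract stores,
# per the write flow described above. Field names are illustrative.

@dataclass
class FileIndex:
    file_hash: str   # hash of the file, provided by the client
    size: int        # file size in bytes
    public_key: str  # used later to prove the client's identity
    shard: int       # shard the contract is assigned to

# Example: an 8-byte file assigned to shard 0 (placeholder values).
idx = FileIndex(file_hash="<file hash>", size=8, public_key="<client key>", shard=0)
```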
At the end of the process:
- All validators will hold the same copy of the data, which increases the resilience of the data in DriveChain.
- Validators transform the file: they split it into a sequence of smaller data chunks, each of a fixed size.
- Each validator computes a Merkle tree (hash tree) over the chunks. At the end of the computation, the client gets the root hash of the Merkle tree.
- After all validators finish downloading the data, the Merkle root is saved in the smart contract and the file can be considered written into the shard, ready to be read by the user.
- The file is stored only by its shard; other shards store only public information about the file.
- Proof of storage is necessary to provide public verifiability of the stored data. The current collator in a shard chooses a random file, then chooses a random chunk from that file and prepares a new block with the following structure: the hash of the file, the sequence number of the chunk and the Merkle root. All this data is added to the block, which is broadcast to all nodes in DriveChain.
- This block is validated first by the validators of the shard, then by verifiers. Anyone can verify that a chunk is correct because the Merkle proof of inclusion is included in the block.
- This process continues indefinitely, until the expiration of the file.
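The chunking and Merkle-tree steps above can be sketched in a few lines. This is a minimal illustration, assuming SHA-256 and the simulator’s 4-byte chunk size; DriveChain’s actual hash function and tree construction may differ.

```python
import hashlib

# Minimal sketch of the chunk-splitting and Merkle-tree scheme described
# above. Chunk size and hash function are assumptions for illustration.

CHUNK_SIZE = 4  # bytes, matching the simulator's chunk size

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def split_chunks(data: bytes, size: int = CHUNK_SIZE):
    """Split a file into fixed-size chunks."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def merkle_root(chunks):
    """Compute the Merkle root over chunk hashes, duplicating the last
    node when a level has odd length."""
    level = [sha256(c) for c in chunks]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

file_data = b"12345678"            # an 8-byte file -> 2 chunks
chunks = split_chunks(file_data)
root = merkle_root(chunks)         # saved in the file index contract

# Proof of storage for chunk 0: the sibling hash of chunk 1 suffices.
proof = [sha256(chunks[1])]
recomputed = sha256(sha256(chunks[0]) + proof[0])
assert recomputed == root          # any verifier can check this
```

This is exactly the check a verifier performs when a block carries a file hash, a chunk number and the Merkle proof of inclusion.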
When a client initializes a read operation the following scheme ensues:
- The client initiates a read operation. They don’t read the file from shard 0, which stores it; instead, the process of choosing relayers begins. Relayers belong to other shards and are chosen randomly, in a process similar to how verifiers are chosen in the new consensus: nodes compute for themselves whether they are relayers. When a node in another shard determines that it is a relayer, it goes to the storing shard and asks for the file, performing an initial verification. If all is OK, the relayers send the file to the client. In the simulator this selection is simply random; in reality it will work similarly to the BitTorrent protocol to improve performance.
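The self-selection step above can be sketched as follows. The hash-threshold election rule is an assumption made for illustration; it is not DriveChain’s actual rule, only a way to show how every node can compute locally, and verifiably, whether it is a relayer for a given read.

```python
import hashlib

# Hedged sketch of relayer self-selection, analogous to how verifiers
# pick themselves in the new consensus. The threshold rule is assumed.

def is_relayer(node_id: str, file_hash: str, threshold: int = 64) -> bool:
    """A node is a relayer for this read if hash(node_id || file_hash)
    starts below the threshold (~64/256 = 25% of nodes self-select)."""
    digest = hashlib.sha256((node_id + file_hash).encode()).digest()
    return digest[0] < threshold

nodes = [f"node-{i}" for i in range(20)]
relayers = [n for n in nodes if is_relayer(n, "file-abc")]
# Each elected relayer fetches the file from the storing shard,
# verifies it, and forwards it to the client.
```

Because the rule is deterministic, any other node can recheck a claimed relayer’s election without coordination.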
In the screenshot above, we see the DriveChain simulator server, which currently has 3 shards, 7 validators per shard and a chunk size of 4 bytes. Each shard has 24 bytes available for storing files. For now there is no data in DriveChain, so it produces empty blocks after a delay; for demonstration purposes, we will give it data, starting with a write operation that creates a random file. The file smart contract will be deployed, and after that the file will be uploaded to the validators.
Creating a file
WRITE_INIT is the first operation; it creates a smart contract and assigns it to shard 0. Now we can start uploading the file to DriveChain, where the validators will also receive it.
In the end we saved the Merkle root for the file in the smart contract, and from that moment validators of shard 0 start generating blocks. In the concluding message you can see the file hash and the chunk id.
We created a very small file of 8 bytes, so there are only 2 chunks. In such a simple case, the Merkle proof consists of a single hash. After that, the block is transmitted to all validators and is verified by the validators of the shard and by verifiers. This is an infinite process.
To execute a read operation, we entered the file hash as a parameter and initialized the process of finding relayers and loading the data from the shard to the relayers.
There are many nodes from other shards that determine they are relayers and they download the data from the shard.
Then we can see the client got the file.
Deleting the file
The final step in the simulator demo was deleting the file.
Executing the deletion operation is simple, and we can confirm success by checking the resulting free volume. If one tries to read a deleted file, they get an error saying the file cannot be read.
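The three simulator operations demoed above (write, read, delete) can be modeled with a toy in-memory shard. This is purely illustrative; the class, method names and error text are assumptions, not the simulator’s API.

```python
# Toy in-memory model of the simulator demo's write/read/delete flow.
# Names and error messages are illustrative, not the simulator's API.

class ShardStore:
    def __init__(self, capacity: int = 24):      # bytes per shard, as in the demo
        self.capacity = capacity
        self.files = {}                          # file_hash -> data

    def free_volume(self) -> int:
        return self.capacity - sum(len(d) for d in self.files.values())

    def write(self, file_hash: str, data: bytes):
        if len(data) > self.free_volume():
            raise ValueError("not enough free volume in shard")
        self.files[file_hash] = data

    def read(self, file_hash: str) -> bytes:
        if file_hash not in self.files:
            raise KeyError("file cannot be read")  # deleted or unknown
        return self.files[file_hash]

    def delete(self, file_hash: str):
        self.files.pop(file_hash, None)

shard = ShardStore()
shard.write("abc", b"12345678")   # free volume drops from 24 to 16
shard.delete("abc")               # free volume returns to 24
# shard.read("abc") now raises KeyError: the file cannot be read
```

As in the demo, success of the deletion shows up directly in the recovered free volume, and a read of the deleted file fails with an error.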
Zackery Li from the Everscale Chinese community joined the Everscale Community call to give an update on our partnership with the BSN (Blockchain-based Service Network), a cross-cloud, cross-portal and cross-framework global public infrastructure network used to develop, deploy and maintain all types of blockchain distributed applications (DApps). The BSN is now implementing a new product called DDC integration. A DDC is a decentralized digital certificate, essentially an equivalent of an NFT. Currently, NFT technology is only allowed in China as long as it’s not associated with cryptocurrencies, which is why the BSN built the new BSN-DDC Network on OPBs (Open Permissioned Blockchains) as the underlying frameworks, allowing permissioned control over node deployment and utility fees paid in fiat currencies rather than cryptocurrencies.
Zackery informed the community, “the BSN has a team dedicated to sign deals with IPs and Brands willing to issue NFTs, and introduce them to select Everscale instead of other chains due to our better TPS.”
The BSN is currently working on integrating TIP 4.1 with the current DDC specification, with the EverX team’s support and the Everscale community’s help. It’s planned to be ready by the end of July. Once it starts functioning, the BSN plans to attract top brands to mint NFTs (DDCs), which will demonstrate the network’s throughput and TPS.
It’s impossible to provide high-quality products for free, so the team has decided to structure the Evernode platform around the company’s philosophy. This applies to each team, such as the SDK and Cloud teams, who both have a number of clients. The community is also considered a client, but as ‘community’ is a very general term, EverX has decided to keep the service free for the community, offer limited support, and ensure there is a foundation in place that pays for the development.
For businesses it’s a clear case: if they want quality service they will contact EverX directly and we will create a dedicated cloud for them.
Challenges and resolutions
At the beginning of July, the development team experienced a pipeline issue due to a Windows builder that had failed for several days. All the pipelines were stuck at that builder, so the team ordered another Windows builder to prevent a recurrence and gave each product its own pipeline.
As Murphy’s law would have it, the team faced another issue, this time API degradation. Degradation occurs most often when a community member executes heavy queries, hurting Surf performance. This led the team to a quick solution: change the architecture and deploy a dedicated cloud for Surf. This buys the team more time to think about longer-term improvements without Surf suffering any performance degradation.
✅ This sums up the last few weeks at EverX! Be sure to follow our accounts and stay up to date:
🤝 Interested in joining a brilliant team of talented developers? Check out our open positions here.