The WeGotcha ecosystem is capable of handling five Uber-scale applications simultaneously. In 2018, Uber’s 75M customers and 3M drivers completed roughly 15M trips daily [https://www.uber.com/newsroom/company-info/].
Assume that every Uber ride involves roughly nine highly simplified execution steps.
That yields about 15M x 9 = 135M executions daily, or roughly 1,600 executions per second. Adding a buffer of about 25% for events such as peak demand, promotional codes, and dispute resolution, we can assume that if Uber were operating on a blockchain today, it would require an average of 2,000 transactions per second.
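The arithmetic behind these figures can be checked with a short script; the nine execution steps per ride and the 25% buffer are the assumptions stated above:

```python
# Back-of-the-envelope throughput estimate for an Uber-scale workload.
DAILY_TRIPS = 15_000_000       # trips per day (Uber, 2018)
STEPS_PER_TRIP = 9             # simplified execution steps per ride (assumption)
SECONDS_PER_DAY = 24 * 60 * 60
BUFFER = 1.25                  # 25% headroom for peak demand, promos, disputes

daily_executions = DAILY_TRIPS * STEPS_PER_TRIP   # 135,000,000
base_tps = daily_executions / SECONDS_PER_DAY     # ~1,562, rounded up to ~1,600
buffered_tps = base_tps * BUFFER                  # ~1,953, rounded up to ~2,000

print(round(base_tps), round(buffered_tps))       # prints: 1562 1953
```

The round numbers in the text (1,600 and 2,000) are these results rounded up to convenient capacity targets.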
To achieve this industry-leading performance, WeGotcha has borrowed findings and lessons from the Steem network [https://steem.io/steem-whitepaper.pdf]. Among these lessons are the following key points:
Keep everything in memory.
Keep the core business logic in a single thread.
Keep cryptographic operations (hashes and signatures) out of the core business logic.
Divide validation into state-dependent and state-independent checks.
Use an object-oriented data model.
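How several of these rules fit together can be illustrated with a minimal sketch. Everything below is hypothetical (the transaction format, account model, and signature check are stand-ins, not WeGotcha's actual code): state-independent signature verification runs in a worker pool outside the core logic, while state-dependent validation and all state mutation happen in a single thread over an in-memory, object-oriented model.

```python
# Hypothetical sketch of the validation split, not WeGotcha's implementation.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class Account:          # object-oriented, fully in-memory data model
    name: str
    balance: int = 0

@dataclass
class Tx:
    sender: str
    receiver: str
    amount: int
    signature: str      # placeholder for a real cryptographic signature

def verify_signature(tx: Tx) -> bool:
    # Stand-in for an expensive cryptographic check. It depends only on
    # the transaction itself (state-independent), so it can run in parallel
    # and stays out of the core business logic.
    return tx.signature == f"signed:{tx.sender}"

accounts = {"alice": Account("alice", 100), "bob": Account("bob", 0)}

def apply_tx(tx: Tx) -> bool:
    # State-dependent validation and mutation: single-threaded, in memory.
    sender = accounts[tx.sender]
    if sender.balance < tx.amount:
        return False
    sender.balance -= tx.amount
    accounts[tx.receiver].balance += tx.amount
    return True

txs = [Tx("alice", "bob", 40, "signed:alice"),
       Tx("alice", "bob", 70, "signed:alice")]

# Stage 1: state-independent checks, parallelized across worker threads.
with ThreadPoolExecutor() as pool:
    verified = [tx for tx, ok in zip(txs, pool.map(verify_signature, txs)) if ok]

# Stage 2: core business logic, strictly one thread, no crypto.
results = [apply_tx(tx) for tx in verified]
print(results)  # the second transfer fails: alice's balance is only 60 by then
```

Because the single core thread never waits on hashing or signature checks, its throughput is bounded only by in-memory object manipulation, which is the property the rules above are designed to exploit.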
By following these simple rules, WeGotcha is able to process 10,000 transactions per second without any significant effort devoted to optimization. Keeping everything in memory is increasingly viable given the recent introduction of Optane™ technology from Intel [https://newsroom.intel.com/press-kits/introducing-intel-optane-technology-bringing-3d-xpoint-memory-to-storage-and-memory-products/]. It should be possible for commodity hardware to handle all of the business logic associated with WeGotcha in a single thread, with all posts kept in memory for rapid indexing; even Google keeps its index of the entire internet largely in RAM. The use of blockchain technology makes it trivial to replicate the database across many machines to prevent loss of data. As Optane™ technology matures, memory will become cheaper and persistent while retaining RAM-like speed. In other words, WeGotcha is designed for the architectures of the future, and it is designed to scale.