The first version of the Light Ethereum Subprotocol (LES/1) and its implementation in Geth are still in an experimental stage, but they are expected to reach a more mature state in a few months, when the basic functions will operate reliably. The light client is designed to behave more or less like a full client, but "lightness" has inherent limitations that DApp developers should understand and consider when designing their applications.
In most cases, a properly designed application can work without even knowing what type of client it is connected to, but we are planning to add an API extension for communicating different client capabilities in order to provide a future-proof interface. While the minor details of LES are still being worked out, I think it is time to clarify the most important differences between full and light clients from the application developer's point of view.
Current limitations
Pending transactions
Light clients do not receive pending transactions from the main Ethereum network. The only pending transactions a light client knows about are the ones that have been created and sent from that client. When a light client sends a transaction, it starts downloading entire blocks until it finds the sent transaction in one of them, then removes it from its pending transaction set.
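The confirmation loop described above can be sketched as follows. This is an illustrative model, not Geth code: the `Block` and `LightClient` names and shapes are assumptions made for the example.

```python
# Hypothetical sketch: a light client has no network-wide pending-transaction
# pool, so it tracks only the hashes of transactions it sent itself and scans
# each newly downloaded full block until those hashes show up.

class Block:
    def __init__(self, number, tx_hashes):
        self.number = number
        self.tx_hashes = tx_hashes

class LightClient:
    def __init__(self):
        self.pending = set()  # hashes of locally created and sent transactions

    def send_transaction(self, tx_hash):
        self.pending.add(tx_hash)

    def on_new_block(self, block):
        """Check a downloaded block and remove any of our pending
        transactions that it includes; return the confirmed set."""
        confirmed = self.pending & set(block.tx_hashes)
        self.pending -= confirmed
        return confirmed

client = LightClient()
client.send_transaction("0xabc")
client.on_new_block(Block(1, ["0xdef"]))             # not ours: still pending
found = client.on_new_block(Block(2, ["0xabc", "0x123"]))
```

Note that this is why a freshly sent transaction only becomes visible to the client once a block containing it has been fully downloaded and scanned.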
Finding a transaction by hash
Currently you can only look up locally created transactions by hash. These transactions and the blocks that include them are stored in the database and can be found by hash later. Finding other transactions is a bit trickier. It is possible (though not implemented yet) to download them from a server and verify that the transaction is actually included in the block, if the server has found it. Unfortunately, if the server claims that the transaction does not exist, the client has no way to verify the validity of that answer. It is possible to ask multiple servers in case the first one did not know about the transaction, but the client can never be absolutely sure about the non-existence of a given transaction.
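The asymmetry above (inclusion is provable, non-existence is not) can be illustrated with a toy Merkle inclusion proof. This is a simplified sketch, not Ethereum's actual trie format: a server that finds the transaction can hand over a proof path that the client checks against a root it trusts, but no analogous short proof exists for "this hash is nowhere in the chain".

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root of a simple binary Merkle tree (duplicate the last node if odd)."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def verify_inclusion(leaf, proof, root):
    """proof is a list of (sibling_hash, sibling_is_left) pairs, leaf to root.
    The client only needs the trusted root, not the whole block."""
    node = h(leaf)
    for sibling, is_left in proof:
        node = h(sibling + node) if is_left else h(node + sibling)
    return node == root

txs = [b"tx0", b"tx1", b"tx2", b"tx3"]
root = merkle_root(txs)
# proof for tx2: its sibling tx3, then the combined hash of (tx0, tx1)
proof = [(h(b"tx3"), False), (h(h(b"tx0") + h(b"tx1")), True)]
included = verify_inclusion(b"tx2", proof, root)
```

A server claiming non-existence can produce no such path, which is exactly why multiple servers can be queried without ever yielding certainty.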
Performance considerations
Request latency
The only thing a light client always has in its database is the last few thousand block headers. This means that retrieving anything else requires the client to send requests to light servers and wait for the answers. The light client tries to optimize request distribution and collects statistics on the usual response times of each server in order to reduce latency. Latency is the key performance parameter of a light client. It is usually in the order of magnitude of 100-200 ms, and it applies to state and contract storage reads as well as block and receipt retrievals. If many requests are made sequentially to perform an operation, the result can be a slow response time for the user. Running API calls in parallel wherever possible can greatly improve performance.
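The effect of parallelizing independent requests can be shown with a small simulation. The latency figure and the `fetch_storage` helper are assumptions made for the example; the real calls would be light-server requests issued through the client API.

```python
import time
from concurrent.futures import ThreadPoolExecutor

LATENCY = 0.05  # simulated 50 ms round trip per light-server request

def fetch_storage(key):
    """Stand-in for a light-client state read that costs one network
    round trip to a light server."""
    time.sleep(LATENCY)
    return f"value-of-{key}"

keys = ["a", "b", "c", "d"]

t0 = time.perf_counter()
sequential = [fetch_storage(k) for k in keys]       # 4 round trips, one by one
seq_time = time.perf_counter() - t0

t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(keys)) as pool:
    parallel = list(pool.map(fetch_storage, keys))  # 4 round trips, overlapped
par_time = time.perf_counter() - t0
```

Four sequential reads cost roughly four round trips, while the parallel version costs roughly one; the gap grows with the number of independent reads an operation needs.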
Searching for events in a long history of blocks
Full clients use a so-called "MIP-mapped" bloom filter to find events quickly in a long list of blocks, so that searching for certain events in the entire block history is reasonably cheap. Unfortunately, MIP-mapped filtering is not easy to use with a light client, because searches can only be performed in individual headers, which is much slower. Searching a few days' worth of block history usually returns after an acceptable amount of time, but at the moment you should not search for anything in the entire history, because it would take an extremely long time.
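The header-by-header search a light client is forced into can be sketched with a miniature per-header bloom filter. This is not the real Ethereum bloom layout (which uses 2048 bits and Keccak); the point is only that without aggregate MIP-mapped filters, every single header's bloom must be tested.

```python
import hashlib

BITS = 256  # toy bloom size; Ethereum's logs bloom is larger

def bloom_bits(topic: bytes):
    """Derive three bit positions from a hash of the topic."""
    digest = hashlib.sha256(topic).digest()
    return {int.from_bytes(digest[i:i + 2], "big") % BITS for i in (0, 2, 4)}

def make_bloom(topics):
    bloom = 0
    for t in topics:
        for bit in bloom_bits(t):
            bloom |= 1 << bit
    return bloom

def maybe_contains(bloom, topic):
    """True means 'possibly present' (bloom filters allow false positives)."""
    return all(bloom >> bit & 1 for bit in bloom_bits(topic))

# one bloom per block header; the light client must scan them one by one
headers = [make_bloom(ts) for ts in ([b"Transfer"],
                                     [b"Approval"],
                                     [b"Transfer", b"Mint"])]
matches = [i for i, bloom in enumerate(headers)
           if maybe_contains(bloom, b"Transfer")]
```

A full node can test one aggregate filter covering thousands of blocks at once; the loop above instead grows linearly with the number of headers searched, which is why whole-history searches are currently impractical on a light client.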
Memory, disk and bandwidth requirements
Here is the good news: a light client does not need a big database, since it can retrieve anything on demand. With garbage collection enabled (which is yet to be implemented), the database will function more like a cache, and a light client will be able to run with as little as 10 MB of storage space. Note that the current Geth implementation uses around 200 MB of memory, which can probably be reduced further. Bandwidth requirements are also lower when the client is not used heavily: usage is generally well under 1 MB/hour when running idle, with an additional 2-3 KB for an average state/storage request.
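The planned cache-like behavior of the database can be sketched as a bounded LRU store with a remote fallback. This is a hypothetical model, not Geth's actual storage layer: evicted entries are simply re-fetched from a light server the next time they are needed.

```python
from collections import OrderedDict

class LightClientCache:
    """Sketch of a garbage-collected store acting as a bounded LRU cache:
    anything evicted can be re-fetched from a light server on demand."""

    def __init__(self, capacity, fetch_remote):
        self.capacity = capacity
        self.fetch_remote = fetch_remote   # fallback, e.g. a LES request
        self.store = OrderedDict()

    def get(self, key):
        if key in self.store:
            self.store.move_to_end(key)    # mark as recently used
            return self.store[key]
        value = self.fetch_remote(key)     # cache miss: ask a server
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False) # evict least recently used entry
        return value

requests = []
cache = LightClientCache(2, lambda k: requests.append(k) or f"data:{k}")
cache.get("h1"); cache.get("h2")
cache.get("h1")                            # served from cache, no request
cache.get("h3")                            # evicts h2 (least recently used)
cache.get("h2")                            # must re-fetch h2 from the server
```

The trade-off is exactly the one the text describes: a small disk footprint in exchange for occasional extra round trips.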
Future improvements
Reducing overall latency with remote execution
Sometimes it is unnecessary to pass data back and forth several times between the client and the server in order to evaluate a function. It would be possible to execute functions on the server side, collect Merkle proofs for every piece of state data the function accessed, and return all the proofs at once, so that the client can re-run the code and verify the proofs. This method can be used both for read-only contract functions and for any application-specific code that operates on the blockchain/state as input.
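The idea can be sketched as follows. This is a hypothetical illustration: the real mechanism would attach Merkle proofs tying each accessed value to a state root, whereas here the "proof" is simply the set of touched values, checked by re-execution on the client side.

```python
# Server side: run the function once, recording every state key it reads,
# and ship the result together with the touched subset of the state.
def run_and_collect(func, state):
    touched = {}
    def read(key):
        touched[key] = state[key]
        return state[key]
    result = func(read)
    return result, touched

# Client side: re-execute the same function using only the proven values
# and confirm it reproduces the server's claimed result.
def verify(func, claimed_result, touched):
    return func(lambda key: touched[key]) == claimed_result

state = {"balance:alice": 70, "balance:bob": 30}
total_supply = lambda read: read("balance:alice") + read("balance:bob")

result, proof = run_and_collect(total_supply, state)   # one round trip
ok = verify(total_supply, result, proof)
```

The saving is in round trips: instead of one request per state access, the client makes a single request and does all verification locally.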
Verifying complex calculations indirectly
One of the main limitations we are working to improve is the slow search speed over log histories. Many of the limitations mentioned above, including the difficulty of obtaining MIP-mapped bloom filters, follow the same pattern: the server (which is a full node) can easily compute a certain piece of information, which can be shared with light clients. But light clients currently have no practical way of checking the validity of that information, since verifying the entire calculation would require so much processing power and bandwidth that it would make using a light client pointless.
Fortunately, there is a safe and trustless solution to the general task of indirectly validating remote calculations based on an input dataset that both parties assume to be available, even if the receiving party only has the hash of the data, not the data itself. This is exactly our scenario, where the Ethereum blockchain itself can serve as the input to such a verified calculation. This means it is possible for light clients to have capabilities close to those of full nodes, because they can ask a light server to remotely evaluate an operation for them that they could not perform themselves. The details of this feature are still being worked out and are outside the scope of this document, but the general idea of the verification method is explained by Dr. Christian Reitwiessner in this Devcon 2 talk.
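One way to get an intuition for this kind of indirect verification is the interactive bisection idea: two parties each commit to a hash of the computation state after every step, and when their claims diverge, the verifier binary-searches for the first disagreeing step and re-executes only that single step. This is a heavily simplified sketch of the general idea, not the actual protocol from the talk; the `step` function and trace shapes are assumptions made for the example.

```python
import hashlib

def step(state):
    """One computation step (illustrative: increment a counter)."""
    return state + 1

def h(state):
    return hashlib.sha256(str(state).encode()).hexdigest()

def make_trace(start, n):
    """Full trace of states and their per-step commitment hashes."""
    states = [start]
    for _ in range(n):
        states.append(step(states[-1]))
    return [h(s) for s in states], states

def first_divergence(trace_a, trace_b):
    """Binary search for the first step where two committed traces differ;
    assumes they agree on the input (index 0) and differ on the output."""
    lo, hi = 0, len(trace_a) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if trace_a[mid] == trace_b[mid]:
            lo = mid
        else:
            hi = mid
    return hi

honest_hashes, honest_states = make_trace(0, 8)
# a cheating prover agrees up to step 4 and lies from step 5 onward
cheat_hashes = honest_hashes[:5] + ["bad"] * (len(honest_hashes) - 5)
i = first_divergence(honest_hashes, cheat_hashes)
# the verifier re-runs only step i from the last agreed state
step_checks_out = h(step(honest_states[i - 1])) == honest_hashes[i]
```

The key property is that the verifier's work is logarithmic in the length of the computation plus a single step of execution, which is what makes full-node-scale computations checkable by a light client.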
Complex applications accessing huge amounts of contract storage can also benefit from this approach by evaluating accessor functions entirely on the server side, without having to download proofs and re-evaluate the functions. Theoretically, it would also be possible to use indirect verification to filter for events that light clients could not otherwise watch. However, in most cases generating proper logs is simpler and more efficient.