OVO Key/Value Storage

Yet another Key/Value Storage?

I started developing OVO, my own implementation of a Key/Value storage, six months ago because I wanted to try developing a project in the Go language. In the past I had already worked on a similar project, GetCache, but it was developed with the .NET Framework.

I don't think the world lacks Key/Value storages, but for me this project has been a great gym in which to train my Go skills. I have realized how extraordinarily simple building this kind of project is with the Go language, and so I would like to share the experience.

What is OVO

Initially, the project aimed to create a distributed cache, like Memcached, that would allow objects and JSON structures to be stored quickly by spreading the load across multiple servers, and that could run on any kind of Linux, Windows, or macOS machine. Each cached object can have a time-to-live (TTL) property set so that it is removed automatically when it expires.
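A simple way to implement this kind of automatic expiration in Go is to schedule the removal when the object is stored. Here is a minimal sketch of the technique (the Cache type and its fields are assumptions for illustration, not OVO's actual code):

import (
    "sync"
    "time"
)

// Cache is a minimal TTL-aware store; a sketch, not OVO's actual type.
type Cache struct {
    mu   sync.Mutex
    data map[string][]byte
}

// StoreWithTTL saves a value and schedules its automatic removal
// after the time-to-live elapses.
func (c *Cache) StoreWithTTL(key string, value []byte, ttl time.Duration) {
    c.mu.Lock()
    c.data[key] = value
    c.mu.Unlock()
    time.AfterFunc(ttl, func() {
        c.mu.Lock()
        delete(c.data, key)
        c.mu.Unlock()
    })
}

A real implementation would also cancel or reset the pending timer when a key is overwritten, otherwise the old timer would remove the new value.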

Then I realized that I needed a highly reliable service, so I introduced a data-replication mechanism: I decided to use asynchronous replication. Cluster nodes communicate using RPC, and data-replication commands are queued on Go channels, which lets me manage the communication between processes easily. I did not want to introduce differences between nodes: there are no Master and Slave nodes, they are all peers.
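To give an idea of the approach, here is a minimal sketch of a replication loop that drains a channel and forwards commands to the peers over RPC; the Peer type, the channel, and the RPC method name are all assumptions for illustration, not OVO's actual code:

import (
    "log"
    "net/rpc"
)

// Peer wraps an RPC connection to another node of the cluster.
type Peer struct {
    client *rpc.Client
}

// replicate drains the replication queue and forwards each update to
// every peer; errors are only logged because replication is
// asynchronous and must not block local writes.
func replicate(updates <-chan *storage.MetaDataObj, peers []*Peer) {
    for obj := range updates {
        for _, p := range peers {
            var reply bool
            // "Node.SetValue" is a hypothetical RPC method name.
            if err := p.client.Call("Node.SetValue", obj, &reply); err != nil {
                log.Println("replication to peer failed:", err)
            }
        }
    }
}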

I finally added some typical features of Key/Value storage systems, expanding OVO's possibilities:
  • Update a value only if it has not changed
  • Delete a value only if it has not changed
  • Get and remove a value
  • Manage atomic counters
From the beginning, the project made wide use of goroutines, channels, and function closures to manage concurrent access to the storage and to perform atomic operations. All operations are queued on a command channel so that the state of the storage remains consistent.
For example, the operation "Delete a value if it is not changed" was initially implemented this way:
// Delete an item if the value is not changed.
func (coll *InMemoryCollection) DeleteValueIfEqual(obj *storage.MetaDataObj) bool {
    retChan := make(chan bool)
    defer close(retChan)
    // Queue the closure on the command channel; a single worker
    // goroutine executes the queued commands sequentially.
    coll.commands <- func() {
        if ret, ok := coll.storage[obj.Key]; ok {
            if bytes.Equal(ret.Data, obj.Data) {
                delete(coll.storage, obj.Key)
                retChan <- true
            } else {
                retChan <- false // values are not equal
            }
        } else {
            retChan <- true // already deleted
        }
    }
    return <-retChan // wait for the result
}
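For this pattern to work, the coll.commands channel must be drained by a single goroutine that executes the queued closures one at a time. A minimal sketch of such a worker loop (the method name is an assumption, not OVO's actual code):

// processCommands runs as a single goroutine and executes the queued
// closures sequentially, so coll.storage is never accessed concurrently
// and needs no explicit locking.
func (coll *InMemoryCollection) processCommands() {
    for cmd := range coll.commands {
        cmd()
    }
}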
After a discussion on the Golang Reddit about the use of mutexes (thanks to those guys), I made another version of the storage that makes use of sync.RWMutex.
I wanted to see the performance difference between an implementation that adopts only goroutines and channels and one that uses a mutex.
// Delete an item if the value is not changed.
func (coll *InMemoryMutexCollection) DeleteValueIfEqual(obj *storage.MetaDataObj) bool {
    coll.Lock()
    defer coll.Unlock()
    if ret, ok := coll.storage[obj.Key]; ok {
        if !bytes.Equal(ret.Data, obj.Data) {
            return false // values are not equal
        }
        delete(coll.storage, obj.Key)
        return true
    }
    return true // already deleted
}
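Calling coll.Lock() directly works because the collection can embed the mutex. A minimal sketch of what such a type could look like (the exact field layout is an assumption, not OVO's actual definition):

import "sync"

// InMemoryMutexCollection guards its map with an embedded RWMutex,
// which provides the Lock/Unlock (and RLock/RUnlock) methods used above.
type InMemoryMutexCollection struct {
    sync.RWMutex
    storage map[string]*storage.MetaDataObj
}

Read-only operations like Get can take the cheaper RLock instead of the exclusive Lock.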
The use of a mutex turned out to be, on average, two and a half times faster than the use of goroutines and channels. This large difference led me to choose the mutex-based implementation.
Even if the channel-based method is slower, an interesting property of adopting goroutines and channels is the linear response of the system: performing tests with 10,000, 100,000, and 1,000,000 goroutines, we get a linear growth in response times, and this honor goes to the creators of Go.
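This kind of comparison is easy to reproduce with the standard testing package. A minimal benchmark sketch, in which the constructor is a hypothetical stand-in for OVO's real one:

import "testing"

// BenchmarkDeleteValueIfEqual measures the channel-based collection
// under parallel load; swap in the mutex-based collection to compare.
func BenchmarkDeleteValueIfEqual(b *testing.B) {
    coll := NewInMemoryCollection() // hypothetical constructor
    obj := &storage.MetaDataObj{Key: "key", Data: []byte("value")}
    b.RunParallel(func(pb *testing.PB) {
        for pb.Next() {
            coll.DeleteValueIfEqual(obj)
        }
    })
}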

Client libraries

The OVO server is accessible through a RESTful API, so anyone can develop client libraries for any language and platform.
So far I have developed clients in Go and C#, while the Java client is in development.
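For example, storing a value can be a single HTTP call. A minimal sketch in Go, in which the /ovo/keystorage route and the JSON field names are assumptions for illustration, not OVO's documented API:

import (
    "bytes"
    "encoding/json"
    "net/http"
)

// putValue stores a value on an OVO node through the RESTful API; the
// route and payload shape are illustrative assumptions.
func putValue(node, key string, data []byte) error {
    payload, err := json.Marshal(map[string]interface{}{
        "Key":  key,
        "Data": data,
    })
    if err != nil {
        return err
    }
    resp, err := http.Post(node+"/ovo/keystorage", "application/json", bytes.NewReader(payload))
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    return nil
}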
The client libraries must implement the sharding mechanism by which the data are distributed across the nodes of an OVO cluster. The distribution of data across the nodes is based on a deterministic calculation of the hashcode of the key.
The cluster of OVO nodes partitions the hashcode range using a simple distribution policy.
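A minimal sketch of this kind of deterministic mapping, using the FNV-1a hash from the standard library (the modulo policy shown here is illustrative; OVO's actual partitioning may differ):

import "hash/fnv"

// nodeForKey deterministically maps a key to one of the cluster nodes
// by hashing it; every client computes the same node for the same key.
func nodeForKey(key string, nodes []string) string {
    h := fnv.New32a()
    h.Write([]byte(key))
    return nodes[h.Sum32()%uint32(len(nodes))]
}

Note that a plain modulo redistributes most keys when nodes join or leave the cluster; consistent hashing is a common way to mitigate this.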


What I appreciated in all this

Many aspects of the Go programming language make the development of distributed applications easy and pleasant:
  • easy management of concurrent access
  • quick implementation of RESTful API middleware
  • RPC communication
  • it makes simple what appears complex
The result is that the OVO server has very good performance, especially for writes. It is easy to install and uses few resources: it compiles into an executable of only 10 MB.


References

The OVO server and clients are all open-source software released under the MIT license.

OVO Key/Value Storage repository
https://github.com/maxzerbini/ovo

OVO Go client library repository
https://github.com/maxzerbini/ovoclient

OVO .Net client library repository
https://github.com/maxzerbini/ovodotnet

