Storm is a simple and powerful toolkit for BoltDB. It provides indexes, a wide range of methods to store and fetch data, an advanced query system, and much more.
In addition to the examples below, see also the examples in the GoDoc.
For extended queries and support for Badger, see also Genji
- Getting Started
- Import Storm
- Open a database
- Simple CRUD system
- Nodes and nested buckets
- Simple Key/Value store
- BoltDB
- License
- Credits
GO111MODULE=on go get -u github.com/asdine/storm/v3
import "github.com/asdine/storm/v3"
Quick way of opening a database
db, err := storm.Open("my.db")
defer db.Close()
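The returned error should normally be checked before using the database; a minimal sketch (assuming the standard log package is imported):
db, err := storm.Open("my.db")
if err != nil {
	log.Fatal(err)
}
defer db.Close()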
Open can receive multiple options to customize the way it behaves. See Options below.
type User struct {
	ID    int    // primary key
	Group string `storm:"index"`  // this field will be indexed
	Email string `storm:"unique"` // this field will be indexed with a unique constraint
	Name  string // this field will not be indexed
	Age   int    `storm:"index"`
}
The primary key can be of any type as long as it is not a zero value. Storm will search for a field with the id tag; if none is present, it will search for a field named ID.
type User struct {
	ThePrimaryKey string `storm:"id"`     // primary key
	Group         string `storm:"index"`  // this field will be indexed
	Email         string `storm:"unique"` // this field will be indexed with a unique constraint
	Name          string // this field will not be indexed
}
Storm handles tags in nested structures with the inline tag:
type Base struct {
	Ident bson.ObjectId `storm:"id"`
}

type User struct {
	Base      `storm:"inline"`
	Group     string    `storm:"index"`
	Email     string    `storm:"unique"`
	Name      string
	CreatedAt time.Time `storm:"index"`
}
user := User{
	ID:        10,
	Group:     "staff",
	Email:     "john@provider.com",
	Name:      "John",
	Age:       21,
	CreatedAt: time.Now(),
}
err := db.Save(&user)
// err == nil
user.ID++
err = db.Save(&user)
// err == storm.ErrAlreadyExists
That's it. Save creates or updates all the required indexes and buckets, checks the unique constraints and saves the object to the store.
Storm can auto-increment integer values so you don't have to worry about that when saving your objects; the new value is automatically set in your field.
type Product struct {
	Pk                  int    `storm:"id,increment"` // primary key with auto increment
	Name                string
	IntegerField        uint64 `storm:"increment"`
	IndexedIntegerField uint32 `storm:"index,increment"`
	UniqueIntegerField  int16  `storm:"unique,increment=100"` // the starting value can be set
}
p := Product{Name: "Vacuum Cleaner"}
fmt.Println(p.Pk)
fmt.Println(p.IntegerField)
fmt.Println(p.IndexedIntegerField)
fmt.Println(p.UniqueIntegerField)
// 0
// 0
// 0
// 0
_ = db.Save(&p)
fmt.Println(p.Pk)
fmt.Println(p.IntegerField)
fmt.Println(p.IndexedIntegerField)
fmt.Println(p.UniqueIntegerField)
// 1
// 1
// 1
// 100
Any object can be fetched, whether it is indexed or not. Storm uses indexes when available; otherwise it falls back to the query system.
var user User
err := db.One("Email", "john@provider.com", &user)
// err == nil
err = db.One("Name", "John", &user)
// err == nil
err = db.One("Name", "Jack", &user)
// err == storm.ErrNotFound
var users []User
err := db.Find("Group", "staff", &users)
var users []User
err := db.All(&users)
var users []User
err := db.AllByIndex("CreatedAt", &users)
var users []User
err := db.Range("Age", 10, 21, &users)
var users []User
err := db.Prefix("Name", "Jo", &users)
var users []User
err := db.Find("Group", "staff", &users, storm.Skip(10))
err = db.Find("Group", "staff", &users, storm.Limit(10))
err = db.Find("Group", "staff", &users, storm.Reverse())
err = db.Find("Group", "staff", &users, storm.Limit(10), storm.Skip(10), storm.Reverse())
err = db.All(&users, storm.Limit(10), storm.Skip(10), storm.Reverse())
err = db.AllByIndex("CreatedAt", &users, storm.Limit(10), storm.Skip(10), storm.Reverse())
err = db.Range("Age", 10, 21, &users, storm.Limit(10), storm.Skip(10), storm.Reverse())
err := db.DeleteStruct(&user)
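A small sketch of a common pattern, fetching a record by a unique field before removing it (reusing the User type and email from the examples above):
var u User
if err := db.One("Email", "john@provider.com", &u); err == nil {
	// DeleteStruct removes the record identified by the primary key of u
	err = db.DeleteStruct(&u)
}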
// Update multiple fields
// Only works for non-zero-value fields (e.g. Name cannot be "", Age cannot be 0)
err := db.Update(&User{ID: 10, Name: "Jack", Age: 45})
// Update a single field
// Also works for zero-value fields (0, false, "", ...)
err := db.UpdateField(&User{ID: 10}, "Age", 0)
err := db.Init(&User{})
Init creates the bucket and indexes for the given structure; it is useful when starting your application.
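For example, a minimal startup sketch (the model list and the use of the standard log package are illustrative):
for _, model := range []interface{}{&User{}, &Product{}} {
	if err := db.Init(model); err != nil {
		log.Fatalf("storm init: %v", err)
	}
}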
Using the struct
err := db.Drop(&User{})
Using the bucket name
err := db.Drop("User")
err := db.ReIndex(&User{})
ReIndex is useful when the structure of the type has changed (e.g. an index tag was added or removed).
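For instance, if an index tag was added to a field after records were already saved, reindexing rebuilds the indexes so the existing records are covered (a hypothetical sketch):
type User struct {
	ID    int
	Email string `storm:"unique"`
	Name  string `storm:"index"` // tag added after records were saved
}

err := db.ReIndex(&User{})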
For more complex queries, you can use the Select method. Select takes any number of Matcher from the q package.
Here are some common Matchers:
// Equality
q.Eq("Name", John)
// Strictly greater than
q.Gt("Age", 7)
// Less than or equal to
q.Lte("Age", 77)
// Regex with name that starts with the letter D
q.Re("Name", "^D")
// In the given slice of values
q.In("Group", []string{"Staff", "Admin"})
// Comparing fields
q.EqF("FieldName", "SecondFieldName")
q.LtF("FieldName", "SecondFieldName")
q.GtF("FieldName", "SecondFieldName")
q.LteF("FieldName", "SecondFieldName")
q.GteF("FieldName", "SecondFieldName")
Matchers can also be combined with And, Or and Not:
// Match if all match
q.And(
	q.Gt("Age", 7),
	q.Re("Name", "^D"),
)
// Match if one matches
q.Or(
	q.Re("Name", "^A"),
	q.Not(
		q.Re("Name", "^B"),
	),
	q.Re("Name", "^C"),
	q.In("Group", []string{"Staff", "Admin"}),
	q.And(
		q.StrictEq("Password", []byte(password)),
		q.Eq("Registered", true),
	),
)
You can find the complete list in the documentation. Select takes any number of matchers and wraps them into a q.And() so it's not necessary to specify it. It returns a Query type.
query := db.Select(q.Gte("Age", 7), q.Lte("Age", 77))
The Query type contains methods to filter and order the records.
// Limit
query = query.Limit(10)
// Skip
query = query.Skip(20)
// Calls can also be chained
query = query.Limit(10).Skip(20).OrderBy("Age").Reverse()
It also provides methods to specify how to fetch them.
var users []User
err = query.Find(&users)
var user User
err = query.First(&user)
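If you only need the number of matching records, the query also exposes a Count method (a brief sketch; check the GoDoc for the exact signature):
// Count the matching records without decoding them into a slice
n, err := query.Count(new(User))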
Examples with Select:
// Find all users with an ID between 10 and 100
err = db.Select(q.Gte("ID", 10), q.Lte("ID", 100)).Find(&users)
// Nested matchers
err = db.Select(q.Or(
	q.Gt("ID", 50),
	q.Lt("Age", 21),
	q.And(
		q.Eq("Group", "admin"),
		q.Gte("Age", 21),
	),
)).Find(&users)
query := db.Select(q.Gte("ID", 10), q.Lte("ID", 100)).Limit(10).Skip(5).Reverse().OrderBy("Age", "Name")
// Find multiple records
err = query.Find(&users)
// or
err = db.Select(q.Gte("ID", 10), q.Lte("ID", 100)).Limit(10).Skip(5).Reverse().OrderBy("Age", "Name").Find(&users)
// Find first record
err = query.First(&user)
// or
err = db.Select(q.Gte("ID", 10), q.Lte("ID", 100)).Limit(10).Skip(5).Reverse().OrderBy("Age", "Name").First(&user)
// Delete all matching records
err = query.Delete(new(User))
// Fetching records one by one (useful when the bucket contains a lot of records)
query = db.Select(q.Gte("ID", 10), q.Lte("ID", 100)).OrderBy("Age", "Name")
err = query.Each(new(User), func(record interface{}) error {
	u := record.(*User)
	...
	return nil
})
See the documentation for a complete list of methods.
Storm also supports transactions:
tx, err := db.Begin(true)
if err != nil {
	return err
}
defer tx.Rollback()

accountA.Amount -= 100
accountB.Amount += 100

err = tx.Save(accountA)
if err != nil {
	return err
}

err = tx.Save(accountB)
if err != nil {
	return err
}

return tx.Commit()
Storm options are functions that can be passed when constructing your Storm instance. You can pass any number of options.
By default, Storm opens a database with the mode 0600 and a timeout of one second. You can change this behavior by using BoltOptions:
db, err := storm.Open("my.db", storm.BoltOptions(0600, &bolt.Options{Timeout: 1 * time.Second}))
To store the data in BoltDB, Storm marshals it in JSON by default. If you wish to change this behavior you can pass a codec that implements codec.MarshalUnmarshaler via the storm.Codec option:
db := storm.Open("my.db", storm.Codec(myCodec))
You can easily implement your own MarshalUnmarshaler (see the sketch after the examples below), but Storm comes with built-in support for JSON (default), GOB, Sereal, Protocol Buffers and MessagePack. These can be used by importing the relevant package and using that codec to configure Storm. The example below shows all variants (without proper error handling):
import (
	"github.com/asdine/storm/v3"
	"github.com/asdine/storm/v3/codec/gob"
	"github.com/asdine/storm/v3/codec/json"
	"github.com/asdine/storm/v3/codec/sereal"
	"github.com/asdine/storm/v3/codec/protobuf"
	"github.com/asdine/storm/v3/codec/msgpack"
)
var gobDb, _ = storm.Open("gob.db", storm.Codec(gob.Codec))
var jsonDb, _ = storm.Open("json.db", storm.Codec(json.Codec))
var serealDb, _ = storm.Open("sereal.db", storm.Codec(sereal.Codec))
var protobufDb, _ = storm.Open("protobuf.db", storm.Codec(protobuf.Codec))
var msgpackDb, _ = storm.Open("msgpack.db", storm.Codec(msgpack.Codec))
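A minimal sketch of a custom codec, assuming codec.MarshalUnmarshaler requires Marshal, Unmarshal and Name methods (check the codec package GoDoc for the exact interface); this one simply delegates to the standard library, imported here as stdjson "encoding/json" to avoid clashing with the storm json codec above:
type myCodec struct{}

func (myCodec) Marshal(v interface{}) ([]byte, error)    { return stdjson.Marshal(v) }
func (myCodec) Unmarshal(b []byte, to interface{}) error { return stdjson.Unmarshal(b, to) }
func (myCodec) Name() string                             { return "mycodec" }

var customDb, _ = storm.Open("custom.db", storm.Codec(myCodec{}))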
Tip: adding Storm tags to generated Protobuf files can be tricky. A good solution is to use a tool that injects the tags during compilation.
You can use an existing bolt.DB connection and pass it to Storm:
bDB, _ := bolt.Open(filepath.Join(dir, "bolt.db"), 0600, &bolt.Options{Timeout: 10 * time.Second})
db := storm.Open("my.db", storm.UseDB(bDB))
Batch mode can be enabled to speed up concurrent writes (see Batch read-write transactions):
db, _ := storm.Open("my.db", storm.Batch())
Storm takes advantage of BoltDB's nested buckets feature by using storm.Node. A storm.Node is the underlying object used by storm.DB to manipulate a bucket. To create a nested bucket and use the same API as storm.DB, you can use the DB.From method.
repo := db.From("repo")
err := repo.Save(&Issue{
	Title:  "I want more features",
	Author: user.ID,
})
err = repo.Save(newRelease("0.10"))
var issues []Issue
err = repo.Find("Author", user.ID, &issues)
var release Release
err = repo.One("Tag", "0.10", &release)
You can also chain the nodes to create a hierarchy:
chars := db.From("characters")
heroes := chars.From("heroes")
enemies := chars.From("enemies")
items := db.From("items")
potions := items.From("consumables").From("medicine").From("potions")
You can even pass the entire hierarchy as arguments to From:
privateNotes := db.From("notes", "private")
workNotes := db.From("notes", "work")
A Node can also be configured. Activating an option on a Node creates a copy, so a Node is always thread-safe.
n := db.From("my-node")
Give a bolt.Tx transaction to the Node
n = n.WithTransaction(tx)
Enable batch mode
n = n.WithBatch(true)
Use a Codec
n = n.WithCodec(gob.Codec)
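Because activating an option returns a copy, these calls can be chained; a small sketch reusing the gob codec and batch mode from above:
n := db.From("my-node").WithCodec(gob.Codec).WithBatch(true)
err := n.Save(&user)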
Storm can be used as a simple, robust key/value store that can store anything. The key and the value can be of any type as long as the key is not a zero value.
Saving data:
db.Set("logs", time.Now(), "I'm eating my breakfast man")
db.Set("sessions", bson.NewObjectId(), &someUser)
db.Set("weird storage", "754-3010", map[string]interface{}{
"hair": "blonde",
"likes": []string{"cheese", "star wars"},
})
Fetching data:
user := User{}
db.Get("sessions", someObjectId, &user)
var details map[string]interface{}
db.Get("weird storage", "754-3010", &details)
db.Get("sessions", someObjectId, &details)
Deleting data:
db.Delete("sessions", someObjectId)
db.Delete("weird storage", "754-3010")
You can find other useful methods in the documentation.
BoltDB is still easily accessible and can be used as usual:
db.Bolt.View(func(tx *bolt.Tx) error {
	bucket := tx.Bucket([]byte("my bucket"))
	val := bucket.Get([]byte("any id"))
	fmt.Println(string(val))
	return nil
})
A transaction can also be passed to Storm:
db.Bolt.Update(func(tx *bolt.Tx) error {
	...
	dbx := db.WithTransaction(tx)
	err = dbx.Save(&user)
	...
	return nil
})
MIT