2x faster and statically typed binary encoder/decoder https://howl.moe/binary
Morgan Bazalgette 3607cc3f77 add read benchmarks 2 months ago
.travis.yml Add travis 4 years ago
LICENSE Initial commit 4 years ago
README.md performance improvements on writer 4 months ago
binary.go performance improvements on writer 4 months ago
read_float.go ReadChain is an useless clusterfuck. Change from ReadChain to Reader 4 years ago
read_int.go ReadChain is an useless clusterfuck. Change from ReadChain to Reader 4 years ago
read_test.go add read benchmarks 2 months ago
read_uint.go Use [8]byte in the struct instead of pool 2 years ago
write_float.go Initial commit 4 years ago
write_int.go Initial commit 4 years ago
write_test.go add read benchmarks 2 months ago
write_uint.go performance improvements on writer 4 months ago

README.md


binary

A faster binary encoder.

go get howl.moe/binary

Migrating to version 2

All slice-related methods have been removed, because (for reads) they allocated their own slices. How slices are encoded in binary protocols is often arbitrary: some prefix the length, some only in certain cases, some use varints, and others use different widths for the length field. For this reason, we removed the methods.

ByteSlice and String have been kept, since they are also used internally.

There is no way to read a string directly like there was in the previous version, because of the aforementioned variability in how lengths are encoded. You can still use the newly-added Read method (which implements io.Reader), passing a byte slice of the desired length, as shown in the sketch below.
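
For example, here is a minimal sketch of reading a length-prefixed string, assuming a 2-byte big-endian length prefix (a protocol choice, not something this package dictates). It only relies on io.Reader, so it works with this package's Reader precisely because Reader implements io.Reader:

```go
package example

import (
	"encoding/binary"
	"io"
)

// readString reads a 2-byte big-endian length prefix followed by that many
// bytes of string data. The prefix width and endianness are protocol
// decisions; this library deliberately leaves them to the caller.
func readString(r io.Reader) (string, error) {
	var lenBuf [2]byte
	if _, err := io.ReadFull(r, lenBuf[:]); err != nil {
		return "", err
	}
	n := binary.BigEndian.Uint16(lenBuf[:])

	// Read exactly n bytes of string data.
	buf := make([]byte, n)
	if _, err := io.ReadFull(r, buf); err != nil {
		return "", err
	}
	return string(buf), nil
}
```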

Why you should

The biggest, yet simplest, change was removing all slice allocations. This is a common trick, used in places such as fasthttp and nanojson (yes, that's what we call shameless advertising).

Previously, every single tiny read and write allocated a byte slice. That is surprisingly expensive: it is a heap allocation, which the garbage collector then has to track, and so on. Writes are now buffered in an internal 512-byte array, and because binary data can be encoded straight into that array, we get a big performance boost.
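
To illustrate the idea, here is a simplified sketch of the technique (not this package's actual implementation): integers are encoded directly into a fixed-size array that lives inside the writer struct, so no per-write allocation happens.

```go
package example

import (
	"encoding/binary"
	"io"
)

// bufWriter sketches the general trick: encode directly into a fixed-size
// array and flush it in one go, instead of allocating a slice per write.
type bufWriter struct {
	w   io.Writer
	buf [512]byte // lives inside the struct, never reallocated
	n   int       // bytes currently buffered
}

// WriteUint32 encodes v in big-endian order straight into the buffer.
func (b *bufWriter) WriteUint32(v uint32) error {
	if b.n+4 > len(b.buf) {
		if err := b.Flush(); err != nil {
			return err
		}
	}
	binary.BigEndian.PutUint32(b.buf[b.n:], v)
	b.n += 4
	return nil
}

// Flush sends the buffered bytes to the underlying writer.
func (b *bufWriter) Flush() error {
	if b.n == 0 {
		return nil
	}
	_, err := b.w.Write(b.buf[:b.n])
	b.n = 0
	return err
}
```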

$ git checkout v1
$ go test -bench=. -benchmem
goos: linux
goarch: amd64
pkg: github.com/thehowl/binary
BenchmarkWriteSmall-4    	37062871	        27.2 ns/op	       1 B/op	       1 allocs/op
BenchmarkWriteMedium-4   	 5283705	       210 ns/op	      40 B/op	       5 allocs/op
BenchmarkWriteLong-4     	  849973	      1417 ns/op	     240 B/op	      12 allocs/op
PASS
ok  	github.com/thehowl/binary	4.592s

$ git checkout master
$ go test -bench=. -benchmem
goos: linux
goarch: amd64
pkg: howl.moe/binary
BenchmarkWriteSmall-4                 	60495008	        17.9 ns/op	       0 B/op	       0 allocs/op
BenchmarkWriteSmallEncodingBinary-4   	40247256	        29.3 ns/op	       1 B/op	       1 allocs/op
BenchmarkWriteMedium-4                	19292994	        52.6 ns/op	       0 B/op	       0 allocs/op
BenchmarkWriteLong-4                  	11028130	       104 ns/op	       0 B/op	       0 allocs/op
BenchmarkWriteLongEncodingBinary-4    	 5126353	       256 ns/op	      96 B/op	       3 allocs/op
PASS
ok  	howl.moe/binary	7.164s

As you can see, for writes of large chunks of data this is up to a 14x improvement over the previous version. As it turns out, the allocations were so expensive that the package was actually slower than encoding/binary. Not anymore: on long writes, this package is now more than 2x faster than encoding/binary.