This article is about easing the burden on the GC: when writing business code we sometimes need to spare a thought for the old garbage collector, but without sacrificing readability. Whether a given piece of code is worth optimizing, and how much performance the optimization actually buys, is a balance you have to strike. Premature optimization is the root of all evil.
- String concatenation. Demo code:
package main

import (
	"bytes"
	"strings"
)

func f1(l int) {
	var s, s1 string = ``, `hello world`
	for i := 0; i < l; i++ {
		s = s + s1
	}
}

func f2(l int) {
	buf := bytes.NewBuffer([]byte{})
	var s1 string = `hello world`
	for i := 0; i < l; i++ {
		buf.WriteString(s1)
	}
}

func f3(l int) {
	var s []string
	var s1 string = `hello world`
	for i := 0; i < l; i++ {
		s = append(s, s1)
	}
	strings.Join(s, ``)
}
Benchmark code:
package main

import (
	"testing"
)

func Benchmark_F1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		f1(100000)
	}
}

func Benchmark_F2(b *testing.B) {
	for i := 0; i < b.N; i++ {
		f2(100000)
	}
}

func Benchmark_F3(b *testing.B) {
	for i := 0; i < b.N; i++ {
		f3(100000)
	}
}
go test -bench=".*" -test.benchmem -count=1
The output:
Benchmark_F1-4 1 9334113815 ns/op 55401130720 B/op 100056 allocs/op
Benchmark_F2-4 1000 2318146 ns/op 2891600 B/op 18 allocs/op
Benchmark_F3-4 100 14660804 ns/op 11459184 B/op 32 allocs/op
PASS
ok _/Users/taomin/pprof/string 13.394s
From the benchmark results, for concatenating 100,000 strings, + is the worst and bytes.NewBuffer is the best; the gap in both time and memory is roughly three orders of magnitude. That said, unless your scenario really does involve heavy string concatenation, don't reach for bytes.Buffer or strings.Join; just concatenate with + and keep the code readable.
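As a side note not covered by the original benchmark, the same concatenation can also be written with strings.Builder (available since Go 1.10). A rough sketch follows; the Grow call is an assumption that we know the final length up front, and simply preallocates the backing buffer:

package main

import "strings"

// f4 concatenates l copies of `hello world` using strings.Builder.
// Grow preallocates the backing array, so the loop allocates at most once.
func f4(l int) string {
	var s1 = `hello world`
	var b strings.Builder
	b.Grow(l * len(s1)) // assumption: the final length is known in advance
	for i := 0; i < l; i++ {
		b.WriteString(s1)
	}
	return b.String()
}

Benchmarked the same way, it should land in the same ballpark as the bytes.Buffer version, since both amortize allocations by growing a single buffer.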
- Temporary object pool (sync.Pool)
An object pool keeps objects around so they can be reused, which cuts down the number of new allocations. On each GC, the runtime calls poolCleanup to empty the Pool, so everything it was holding becomes eligible for collection. This is a deliberate property of Pool, and it has two consequences: stateful objects must not be stored in a Pool, and a Pool cannot serve as a connection pool (a small sketch after the Pool definition below demonstrates this behavior). The official documentation reads:
A Pool is a set of temporary objects that may be individually saved and retrieved.
Any item stored in the Pool may be removed automatically at any time without notification. If the Pool holds the only reference when this happens, the item might be deallocated.
A Pool is safe for use by multiple goroutines simultaneously.
Pool's purpose is to cache allocated but unused items for later reuse, relieving pressure on the garbage collector. That is, it makes it easy to build efficient, thread-safe free lists. However, it is not suitable for all free lists.
An appropriate use of a Pool is to manage a group of temporary items silently shared among and potentially reused by concurrent independent clients of a package. Pool provides a way to amortize allocation overhead across many clients.
An example of good use of a Pool is in the fmt package, which maintains a dynamically-sized store of temporary output buffers. The store scales under load (when many goroutines are actively printing) and shrinks when quiescent.
On the other hand, a free list maintained as part of a short-lived object is not a suitable use for a Pool, since the overhead does not amortize well in that scenario. It is more efficient to have such objects implement their own free list.
A Pool must not be copied after first use.
type Pool struct {
	// New optionally specifies a function to generate
	// a value when Get would otherwise return nil.
	// It may not be changed concurrently with calls to Get.
	New func() interface{}
	// contains filtered or unexported fields
}
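To make the "emptied on GC" behavior concrete, here is a minimal sketch (not from the original post). One caveat: since Go 1.13 a Pool keeps a "victim cache", so an object survives one GC cycle and is only dropped after two.

package main

import (
	"fmt"
	"runtime"
	"sync"
)

type buffer struct{ data [1024]byte }

var pool = sync.Pool{
	// New runs only when the pool has nothing to hand out.
	New: func() interface{} {
		fmt.Println("allocating a new buffer")
		return new(buffer)
	},
}

func main() {
	b := pool.Get().(*buffer) // pool is empty, so New is called
	pool.Put(b)

	pool.Get() // typically reuses b without calling New
	pool.Put(b)

	runtime.GC()
	runtime.GC() // two cycles: the victim cache is cleared as well

	pool.Get() // the pool was emptied by the GC cycles, so New runs again
}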
Someone online wrote a Pool example that illustrates the point well, so his code is used here as the illustration:
package main

import (
	"fmt"
	"io"
	"net/http"
	"sync"
)

// holder records how many distinct []byte buffers were used across the concurrent requests
var mu sync.Mutex
var holder map[string]bool = make(map[string]bool)

// temporary object pool
var p = sync.Pool{
	New: func() interface{} {
		buffer := make([]byte, 1024)
		return &buffer
	},
}

func readContent(wg *sync.WaitGroup) {
	defer wg.Done()
	resp, err := http.Get("http://my.oschina.net/xinxingegeya/home")
	if err != nil {
		fmt.Println(err)
		return // without this return, resp.Body.Close() below would panic on a nil response
	}
	defer resp.Body.Close()
	byteSlice := p.Get().(*[]byte) // type assertion
	key := fmt.Sprintf("%p", byteSlice)
	mu.Lock()
	_, ok := holder[key]
	if !ok {
		holder[key] = true
	}
	mu.Unlock()
	_, err = io.ReadFull(resp.Body, *byteSlice)
	if err != nil {
		fmt.Println(err)
	}
	p.Put(byteSlice)
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go readContent(&wg)
	}
	wg.Wait()
	for key, val := range holder {
		fmt.Println("Key:", key, "Value:", val)
	}
}
Use Pool with care, though; its temperament isn't the friendliest.
- Converting a string to a byte slice
Converting a string to a byte slice copies the string's contents into a new byte array. One optimization idea shared earlier on 雨虹学堂 is to use pointer tricks from the unsafe package and operate on the string's address directly, avoiding the memory copy. Personally I think this optimization goes a bit too far, because it kills the readability of the code. For example:
s := "hello world!"
b := []byte(s)
gets rewritten as:
import "unsafe"

// str2bytes reinterprets the string header {data, len} as a slice header
// {data, len, cap} without copying the underlying bytes.
func str2bytes(s string) []byte {
	x := (*[2]uintptr)(unsafe.Pointer(&s))
	h := [3]uintptr{x[0], x[1], x[1]}
	return *(*[]byte)(unsafe.Pointer(&h))
}
Unless you sit down and study it, this code is impossible to follow.
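For a sense of what the trick actually saves, here is a rough benchmark sketch (not from the original post). It assumes str2bytes from above lives in the same package, and the test string size is arbitrary; also note the slice returned by str2bytes aliases the string's memory and must be treated as read-only:

package main

import (
	"strings"
	"testing"
)

var s = strings.Repeat("hello world!", 1000) // ~12 KB test string, size chosen arbitrarily
var sink []byte

// Benchmark_Copy converts with a plain []byte(s), which copies the data on every iteration.
func Benchmark_Copy(b *testing.B) {
	for i := 0; i < b.N; i++ {
		sink = []byte(s)
	}
}

// Benchmark_Unsafe reuses the string's memory via str2bytes: no copy, no allocation.
func Benchmark_Unsafe(b *testing.B) {
	for i := 0; i < b.N; i++ {
		sink = str2bytes(s)
	}
}

Run with go test -bench=. -benchmem; the unsafe version should report 0 B/op since nothing is copied, while the plain conversion allocates a new array of the string's length each time.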
All of the approaches above revolve around one central idea: reduce the number of objects. Object count is the key factor in GC cost, because the collector first has to find every object, decide whether it is still in use, and mark the unused ones so they can be reclaimed later. Since Go 1.5 the GC has used a tri-color marking algorithm:
The GC marks objects with three colors:
(1) Black: reachable, and the object's own references have already been scanned
(2) Gray: reachable, but the object's own references have not been scanned yet
(3) White: not yet reached; objects still white at the end of marking are garbage
Marking starts from the roots, which include global pointers and the pointers on every goroutine's stack. The mark phase has two steps: first the roots are traversed and every node reachable from them is marked gray; then the gray queue is processed, with each gray node's reachable children added to the gray set and the node itself turned black, repeating until no gray nodes remain. Whatever is left white is what gets cleaned up. Because marking runs concurrently with the user program, a re-scan pass is required once it finishes, and that re-scan needs a stop-the-world (STW) pause.
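A toy sketch of the mark phase follows. It is purely illustrative, not the runtime's actual implementation; the object graph, the object type, and all names are made up:

package main

import "fmt"

type color int

const (
	white color = iota // not yet reached
	grey               // reached, children not yet scanned
	black              // reached, children scanned
)

type object struct {
	name     string
	color    color
	children []*object
}

// mark walks the object graph from the roots using a grey worklist.
func mark(roots []*object) {
	var greyQueue []*object
	for _, r := range roots {
		r.color = grey
		greyQueue = append(greyQueue, r)
	}
	for len(greyQueue) > 0 {
		obj := greyQueue[0]
		greyQueue = greyQueue[1:]
		for _, c := range obj.children {
			if c.color == white {
				c.color = grey
				greyQueue = append(greyQueue, c)
			}
		}
		obj.color = black
	}
}

func main() {
	a := &object{name: "a"}
	b := &object{name: "b"}
	c := &object{name: "c"} // not referenced by anything reachable
	a.children = []*object{b}

	mark([]*object{a}) // a and b end up black, c stays white

	for _, o := range []*object{a, b, c} {
		fmt.Println(o.name, "still white (garbage)?", o.color == white)
	}
}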
I won't go further into GC internals here; knowing how to ease the GC's burden should be a standard trick in every Go engineer's bag. end~