Compare commits

193 Commits

| SHA1 |
|---|
| 2d1c1449aa |
| f348298acd |
| ad47104fc7 |
| 4d4fb64b59 |
| 9ce3a6e468 |
| 34e9e362b9 |
| f87cbf73e8 |
| 362ada2ebb |
| b4b7221dab |
| 2d8dc56f4e |
| 1cf4313d12 |
| 7bb8f19b90 |
| 1263a4e751 |
| 1a432a9b79 |
| 8951fdbd59 |
| 77129367fe |
| ae2e1e0933 |
| 4143f14fbd |
| 8e3e262c2f |
| 6f11deab96 |
| a17acd7351 |
| 1cbb4195e4 |
| c471dd8297 |
| 25deffb7d6 |
| 7eb0e0040e |
| 15994988be |
| ae293a6102 |
| 5ed4702b62 |
| 287ddc3424 |
| 57a4227007 |
| c9eacd1bf2 |
| 5f38162fbb |
| c755f75a11 |
| ac9eb64501 |
| 3dd013c13c |
| ab0205587a |
| 70955878c9 |
| 86f017d92f |
| 446cc3f9a7 |
| 6529921d71 |
| 52e441c111 |
| a6b169d336 |
| 5514cfee6c |
| b1de184bda |
| af2405ba48 |
| bee80330b0 |
| f1de8659b7 |
| bfc5835d82 |
| 82bc3e27d6 |
| 5436a95430 |
| 2c675f2cbe |
| 8e9427b0c0 |
| edcf78f77c |
| 8f88d14c07 |
| 1372801b6f |
| 2af904f442 |
| 885b27e0d1 |
| 0207e4731f |
| 038b6749a3 |
| ead577cbfb |
| 26e9231e48 |
| 938ddd1141 |
| 68080539f7 |
| 5583b303be |
| 9301c9b49b |
| 8d2e210522 |
| 32661552ff |
| 9a111a59bf |
| f5e472ea9f |
| 7599e2c793 |
| 3726f5b9c3 |
| dee517217e |
| 983912e44e |
| 982390f4b2 |
| 5e3f51a8b0 |
| 675ae276f9 |
| 7899c45176 |
| 1dbb6feb57 |
| 661685d136 |
| 28947a0352 |
| bd594684ce |
| fda3609873 |
| d20db0c546 |
| 07fb22ae70 |
| 8d5c3944ad |
| 7cf28aa9f4 |
| 731867b73c |
| 1c6df2d9a2 |
| ee5a248a39 |
| c174f85656 |
| 2669aac7c9 |
| f1aec74b11 |
| 1d382b2bf6 |
| 55cac537b1 |
| 75258fa195 |
| cef2ca6d8e |
| 4b1651bb3e |
| 78c116bca9 |
| a7c46f5582 |
| f947d35bc3 |
| a49e0166d4 |
| 7272b592d5 |
| cced739d0e |
| 54ac46b3e4 |
| 8f9aa2fd64 |
| c7b9cd5853 |
| f9dfac55b0 |
| b4ad1b5465 |
| e2b2b7e255 |
| 80b691564c |
| dfc326b771 |
| 7cfde70a9f |
| 20837ba983 |
| 8dda32a599 |
| 42ce2a4351 |
| 6574d5cd29 |
| 4e9ae9a8f5 |
| a9ce3cf733 |
| 65708a0f12 |
| e2a3b5f9ee |
| 7a752537c5 |
| abd0b63fe9 |
| 7ac7cd452b |
| 94cecdb8c0 |
| a8b35ba971 |
| bc1ab84b75 |
| acc895d2df |
| 23dbd0a92f |
| c069b5cd97 |
| f1dfe50d8b |
| 7d3820175f |
| 75032fdbb7 |
| 23b3ad63cf |
| 7afd0c86cd |
| 5f0a341d22 |
| 2a117376b7 |
| 2fc750532d |
| 477be63cff |
| 39b90357db |
| 2bd916eb73 |
| 4d1b450b33 |
| cb70812cb7 |
| 42e030e368 |
| f84cdc921d |
| 02c07e7f4f |
| d6ea190688 |
| b20487b928 |
| 004cf5693f |
| ef8de6feb0 |
| d791ebe634 |
| 5af4a1817e |
| e97b8c55f3 |
| 99fcb76210 |
| d4fd34165e |
| bd4741a0a0 |
| 5945c32646 |
| 9a9dc2594d |
| 02547e9475 |
| d81a823da1 |
| df74bcc885 |
| 094bcebfa3 |
| bb2a16720b |
| 86d9a0c0f3 |
| 5cf9d72ed3 |
| 801605676c |
| a9dec75e59 |
| 08d9d90fe1 |
| 9749db9235 |
| 11073aaaa5 |
| e35ddc4d53 |
| 99b06e813e |
| 7164349944 |
| bf43b3adee |
| 6aa4b3c8a9 |
| 7a9f7ef95e |
| ee1a9d3ec7 |
| 6a69e58be5 |
| 6e1d788677 |
| 24f8f789c5 |
| 2fb779f990 |
| 0a9d3cd309 |
| 977b1aba1c |
| a02aeeb906 |
| 7e2e63137e |
| 4b35219c27 |
| 0247c4701d |
| e2cd0b8e4f |
| ee93171c63 |
| ddd2302cb2 |
| c96d2288b3 |
| 6f5a088091 |
| 9a07797e29 |
| 055a020d33 |
55 CHANGELOG

@@ -1,4 +1,59 @@

proxy changelog

v4.5
1. Improved the connection-management logic of the mux NAT-traversal (intranet penetration) mode for better stability.
2. The mux NAT-traversal mode now supports the tcp and kcp transports in addition to tls, so all three transports (tcp, tls, kcp) are available.
3. The keygen parameter gained a new usage: proxy keygen usage.
4. The http(s)/socks5 proxies now support self-signed certificates for tls.
5. Upgrading is recommended.

v4.4
1. Added protocol conversion via the sps subcommand (short for socks+https). sps does not provide proxy functionality itself; it accepts proxy requests and "converts and forwards" them to an existing http(s) or socks5 proxy. sps can turn an existing http(s) or socks5 proxy into a single port that serves both http(s) and socks5, with the http(s) side supporting forward and reverse proxying (SNI); the converted SOCKS5 proxy does not support UDP. The existing http(s) or socks5 proxy can be reached over tls, tcp, or kcp, and chained connections are supported, so multiple sps nodes can be connected in levels to build an encrypted tunnel.
2. Added configuration of the KCP transport: up to 17 parameters can be tuned freely to optimize kcp transfer efficiency.
3. NAT traversal: server and client gained a --session-count parameter that sets how many sessions the server opens to the bridge per listening port and how many sessions the client opens to the bridge. Previously this was 1; now throughput improves by a factor of N, where N is the value you set. This parameter largely solves the congestion problem of multiplexing. The default is 10 as of v4.4.

v4.3
1. Improved the certificate-generation logic of the keygen parameter to avoid fingerprintable certificates.
2. The http(s) and socks proxies gained the --dns-address and --dns-ttl parameters, used to choose the DNS server proxy uses when resolving domains (--dns-address) and how many seconds results are cached (--dns-ttl). This avoids interference from the system DNS, and the cache also cuts resolution time and speeds up access.
3. Improved the basic-auth logic of the http proxy.
Note:
Certificates generated by v4.3 cannot be used with v4.2 and earlier.

v4.2
1. Improved NAT traversal to avoid stale connection records when a client goes offline unexpectedly.
2. The http proxy gained SNI support; the http(s) proxy mode now supports reverse proxying and transparent http(s) proxying.
3. Added an English manual.

v4.1
1. Improved the smart domain check in the http(s) and socks5 proxies: intranet IPs now go over the local network directly, improving the browsing experience, and the check itself is faster.
2. The http proxy's basic auth now also covers the https protocol, so basic auth can control all http(s) traffic.
3. The repository now vendors its dependencies in a vendor directory, so you can go build right after cloning and no longer need to worry about go get failures breaking the build.

v4.0
1. The three NAT-traversal components were rebuilt as a multiplexing version that uses github.com/xtaci/smux to multiplex tcp connections. The famous kcp-go is built on this library, and the wide use of kcptun (a dual-sided accelerator based on kcp-go) has proven the library's power and stability. The multiplexing version's subcommands are server, client, and bridge, used exactly like the earlier tserver, tclient, and tbridge. In addition, server and client gained a compression parameter --c; transfers are faster with compression enabled.

v3.9
1. Added the --forever parameter for supervised running, e.g.: proxy http --forever.
proxy forks a child process, monitors it, and restarts it 5 seconds after an abnormal exit.
Combined with the background parameter --daemon and the log parameter --log, this keeps proxy running in the background despite unexpected exits, and the log file records proxy's output.
For example: proxy http -p ":9090" --forever --log proxy.log --daemon

v3.8
1. Added the --log parameter to write logs to a file, e.g.: --log proxy.log; logs then go to proxy.log, which makes troubleshooting easier.
142 Godeps/Godeps.json (generated, new file)

@@ -0,0 +1,142 @@

{
    "ImportPath": "snail007/proxy",
    "GoVersion": "go1.9",
    "GodepVersion": "v80",
    "Packages": [
        "./..."
    ],
    "Deps": [
        {
            "ImportPath": "github.com/golang/snappy",
            "Rev": "553a641470496b2327abcac10b36396bd98e45c9"
        },
        {
            "ImportPath": "github.com/miekg/dns",
            "Comment": "v1.0.4-1-g40b5202",
            "Rev": "40b520211179dbf7eaafaa7fe1ffaa1b7d929ee0"
        },
        {
            "ImportPath": "github.com/xtaci/kcp-go",
            "Comment": "v3.19-6-g21da33a",
            "Rev": "21da33a6696d67c1bffb3c954366499d613097a6"
        },
        {
            "ImportPath": "github.com/xtaci/smux",
            "Comment": "v1.0.6",
            "Rev": "ebec7ef2574b42a7088cd7751176483e0a27d458"
        },
        {
            "ImportPath": "golang.org/x/crypto/pbkdf2",
            "Rev": "0fcca4842a8d74bfddc2c96a073bd2a4d2a7a2e8"
        },
        {
            "ImportPath": "golang.org/x/crypto/ssh",
            "Rev": "0fcca4842a8d74bfddc2c96a073bd2a4d2a7a2e8"
        },
        {
            "ImportPath": "golang.org/x/time/rate",
            "Rev": "6dc17368e09b0e8634d71cac8168d853e869a0c7"
        },
        {
            "ImportPath": "gopkg.in/alecthomas/kingpin.v2",
            "Comment": "v2.2.5",
            "Rev": "1087e65c9441605df944fb12c33f0fe7072d18ca"
        },
        {
            "ImportPath": "golang.org/x/crypto/ed25519",
            "Rev": "0fcca4842a8d74bfddc2c96a073bd2a4d2a7a2e8"
        },
        {
            "ImportPath": "golang.org/x/net/ipv4",
            "Rev": "5ccada7d0a7ba9aeb5d3aca8d3501b4c2a509fec"
        },
        {
            "ImportPath": "golang.org/x/net/ipv6",
            "Rev": "5ccada7d0a7ba9aeb5d3aca8d3501b4c2a509fec"
        },
        {
            "ImportPath": "golang.org/x/crypto/ed25519/internal/edwards25519",
            "Rev": "0fcca4842a8d74bfddc2c96a073bd2a4d2a7a2e8"
        },
        {
            "ImportPath": "golang.org/x/net/bpf",
            "Rev": "5ccada7d0a7ba9aeb5d3aca8d3501b4c2a509fec"
        },
        {
            "ImportPath": "golang.org/x/net/internal/iana",
            "Rev": "5ccada7d0a7ba9aeb5d3aca8d3501b4c2a509fec"
        },
        {
            "ImportPath": "golang.org/x/net/internal/socket",
            "Rev": "5ccada7d0a7ba9aeb5d3aca8d3501b4c2a509fec"
        },
        {
            "ImportPath": "github.com/pkg/errors",
            "Comment": "v0.8.0-6-g602255c",
            "Rev": "602255cdb6deaf1523ea53ac30eae5554ba7bee9"
        },
        {
            "ImportPath": "github.com/templexxx/reedsolomon",
            "Comment": "0.1.1-4-g7092926",
            "Rev": "7092926d7d05c415fabb892b1464a03f8228ab80"
        },
        {
            "ImportPath": "github.com/templexxx/xor",
            "Comment": "0.1.2",
            "Rev": "0af8e873c554da75f37f2049cdffda804533d44c"
        },
        {
            "ImportPath": "github.com/tjfoc/gmsm/sm4",
            "Comment": "v1.0.1-3-g9d99fac",
            "Rev": "9d99face20b0dd300b7db50b3f69758de41c096a"
        },
        {
            "ImportPath": "golang.org/x/crypto/blowfish",
            "Rev": "0fcca4842a8d74bfddc2c96a073bd2a4d2a7a2e8"
        },
        {
            "ImportPath": "golang.org/x/crypto/cast5",
            "Rev": "0fcca4842a8d74bfddc2c96a073bd2a4d2a7a2e8"
        },
        {
            "ImportPath": "golang.org/x/crypto/salsa20",
            "Rev": "0fcca4842a8d74bfddc2c96a073bd2a4d2a7a2e8"
        },
        {
            "ImportPath": "golang.org/x/crypto/tea",
            "Rev": "0fcca4842a8d74bfddc2c96a073bd2a4d2a7a2e8"
        },
        {
            "ImportPath": "golang.org/x/crypto/twofish",
            "Rev": "0fcca4842a8d74bfddc2c96a073bd2a4d2a7a2e8"
        },
        {
            "ImportPath": "golang.org/x/crypto/xtea",
            "Rev": "0fcca4842a8d74bfddc2c96a073bd2a4d2a7a2e8"
        },
        {
            "ImportPath": "github.com/templexxx/cpufeat",
            "Rev": "3794dfbfb04749f896b521032f69383f24c3687e"
        },
        {
            "ImportPath": "golang.org/x/crypto/salsa20/salsa",
            "Rev": "0fcca4842a8d74bfddc2c96a073bd2a4d2a7a2e8"
        },
        {
            "ImportPath": "golang.org/x/crypto/curve25519",
            "Rev": "0fcca4842a8d74bfddc2c96a073bd2a4d2a7a2e8"
        },
        {
            "ImportPath": "github.com/alecthomas/template",
            "Rev": "a0175ee3bccc567396460bf5acd36800cb10c49c"
        },
        {
            "ImportPath": "github.com/alecthomas/units",
            "Rev": "2efee857e7cfd4f3d0138cc3cbb1b4966962b93a"
        },
        {
            "ImportPath": "github.com/alecthomas/template/parse",
            "Rev": "a0175ee3bccc567396460bf5acd36800cb10c49c"
        }
    ]
}
5 Godeps/Readme (generated, new file)

@@ -0,0 +1,5 @@

This directory tree is generated automatically by godep.

Please do not edit.

See https://github.com/tools/godep for more information.
803 README_ZH.md (new file)

@@ -0,0 +1,803 @@

<img src="https://github.com/snail007/goproxy/blob/master/docs/images/logo.jpg?raw=true" width="200"/>

Proxy is a high-performance http, https, websocket, tcp, udp, and socks5 proxy server written in golang. It supports forward proxying, reverse proxying, transparent proxying, NAT traversal (intranet penetration), TCP/UDP port mapping, SSH relaying, TLS-encrypted transport, and protocol conversion.

[Download here](https://github.com/snail007/goproxy/releases) Official QQ group: 189618940

---

[](https://github.com/snail007/goproxy/) []() [](https://github.com/snail007/goproxy/releases) [](https://github.com/snail007/goproxy/releases)

[English Manual](/README.md)

### Features
- Chained proxies: the program itself can act as a first-level proxy; with a parent proxy configured it can act as a second-level proxy, or even an N-th-level proxy.
- Encrypted communication: if the program is not a first-level proxy and its parent proxy is also this program, communication with the parent is encrypted with TLS at the transport layer, providing strong encryption that is secure and has no fingerprint.
- Smart HTTP and SOCKS5 proxying: proxy automatically detects whether the requested website is blocked; if it is, the parent proxy (when configured) is used; if not, the website is accessed directly to speed things up, bypassing the parent proxy.
- Domain black and white lists for finer control over how websites are reached.
- Cross-platform: proxy runs well on Windows, Linux, macOS, and even a Raspberry Pi.
- Multi-protocol support: HTTP(S), TCP, UDP, Websocket, and SOCKS5 proxies.
- TCP/UDP port forwarding.
- NAT traversal (intranet penetration) over TCP and UDP.
- SSH relaying: the HTTP(S) and SOCKS5 proxies can relay through SSH; the upstream Linux server needs no server-side software, a local proxy is enough for happy browsing.
- [KCP](https://github.com/xtaci/kcp-go) protocol support: the HTTP(S) and SOCKS5 proxies can carry data over KCP, reducing latency and improving the browsing experience.
- External API integration: HTTP(S) and SOCKS5 proxy authentication can be integrated with an external HTTP API, so proxy users can be managed conveniently from an external system.
- Reverse proxying: resolve a domain directly to the IP proxy listens on, and proxy will access the target HTTP(S) websites on your behalf.
- Transparent HTTP(S) proxying: with iptables on a gateway, redirect outgoing traffic to ports 80 and 443 to proxy and you get a smart router proxy that clients never notice.
- Protocol conversion: an existing HTTP(S) or SOCKS5 proxy can be turned into a single port that serves both HTTP(S) and SOCKS5; the converted SOCKS5 proxy does not support UDP.

### Why need these?
- When, for whatever reason, we cannot reach our services hosted elsewhere, a secure tunnel built from several connected proxy nodes lets us reach them again.
- Local development against the WeChat API, for easy debugging.
- Remote access to machines on an internal network.
- Playing LAN games with friends.
- Games that used to be LAN-only can now be played from anywhere.
- A replacement for tools such as 圣剑内网通, 显IP内网通, and 花生壳.
- ...

This page is the v4.5 manual; manuals for other versions are linked below.
- [v4.4 manual](https://github.com/snail007/goproxy/tree/v4.4)
- [v4.3 manual](https://github.com/snail007/goproxy/tree/v4.3)
- [v4.2 manual](https://github.com/snail007/goproxy/tree/v4.2)
- [v4.0-v4.1 manual](https://github.com/snail007/goproxy/tree/v4.1)
- [v3.9 manual](https://github.com/snail007/goproxy/tree/v3.9)
- [v3.8 manual](https://github.com/snail007/goproxy/tree/v3.8)
- [v3.6-v3.7 manual](https://github.com/snail007/goproxy/tree/v3.6)
- [v3.5 manual](https://github.com/snail007/goproxy/tree/v3.5)
- [v3.4 manual](https://github.com/snail007/goproxy/tree/v3.4)
- [v3.3 manual](https://github.com/snail007/goproxy/tree/v3.3)
- [v3.2 manual](https://github.com/snail007/goproxy/tree/v3.2)
- [v3.1 manual](https://github.com/snail007/goproxy/tree/v3.1)
- [v3.0 manual](https://github.com/snail007/goproxy/tree/v3.0)
- [v2.x manual](https://github.com/snail007/goproxy/tree/v2.2)

### How to find the community?
[Join the gitter chat](https://gitter.im/go-proxy/Lobby?utm_source=share-link&utm_medium=link&utm_campaign=share-link)
[Join the Telegram group](https://t.me/joinchat/GYHXghCDSBmkKZrvu4wIdQ)

### Installation
1. [Quick install](#自动安装)
1. [Manual install](#手动安装)

### Read this before first use
- [Environment](#首次使用必看-1)
- [Using a configuration file](#使用配置文件)
- [Debug output](#调试输出)
- [Using a log file](#使用日志文件)
- [Running in the background](#后台运行)
- [Supervised running](#守护运行)
- [Generating certificate files for encrypted communication](#生成加密通讯需要的证书文件)
- [Security advice](#安全建议)

### Manual contents
- [1. HTTP proxy](#1http代理)
  - [1.1 Basic HTTP proxy](#11普通http代理)
  - [1.2 Basic two-level HTTP proxy](#12普通二级http代理)
  - [1.3 Two-level HTTP proxy (encrypted)](#13http二级代理加密)
  - [1.4 Three-level HTTP proxy (encrypted)](#14http三级代理加密)
  - [1.5 Basic authentication](#15basic认证)
  - [1.6 Forcing traffic through the parent HTTP proxy](#16http代理流量强制走上级http代理)
  - [1.7 Relaying over SSH](#17https通过ssh中转)
    - [1.7.1 With an SSH username and password](#171-ssh用户名和密码的方式)
    - [1.7.2 With an SSH username and key](#172-ssh用户名和密钥的方式)
  - [1.8 KCP transport](#18kcp协议传输)
  - [1.9 HTTP(S) reverse proxy](#19-https反向代理)
  - [1.10 Transparent HTTP(S) proxy](#110-https透明代理)
  - [1.11 Custom DNS](#111-自定义dns)
  - [1.12 Help](#112-查看帮助)
- [2. TCP proxy](#2tcp代理)
  - [2.1 Basic one-level TCP proxy](#21普通一级tcp代理)
  - [2.2 Basic two-level TCP proxy](#22普通二级tcp代理)
  - [2.3 Basic three-level TCP proxy](#23普通三级tcp代理)
  - [2.4 Encrypted two-level TCP proxy](#24加密二级tcp代理)
  - [2.5 Encrypted three-level TCP proxy](#25加密三级tcp代理)
  - [2.6 Help](#26查看帮助)
- [3. UDP proxy](#3udp代理)
  - [3.1 Basic one-level UDP proxy](#31普通一级udp代理)
  - [3.2 Basic two-level UDP proxy](#32普通二级udp代理)
  - [3.3 Basic three-level UDP proxy](#33普通三级udp代理)
  - [3.4 Encrypted two-level UDP proxy](#34加密二级udp代理)
  - [3.5 Encrypted three-level UDP proxy](#35加密三级udp代理)
  - [3.6 Help](#36查看帮助)
- [4. NAT traversal](#4内网穿透)
  - [4.1 How it works](#41原理说明)
  - [4.2 Basic TCP usage](#42tcp普通用法)
  - [4.3 Local WeChat API development](#43微信接口本地开发)
  - [4.4 Basic UDP usage](#44udp普通用法)
  - [4.5 Advanced usage 1](#45高级用法一)
  - [4.6 Advanced usage 2](#46高级用法二)
  - [4.7 The server -r parameter](#47server的-r参数)
  - [4.8 Help](#48查看帮助)
- [5. SOCKS5 proxy](#5socks5代理)
  - [5.1 Basic SOCKS5 proxy](#51普通socks5代理)
  - [5.2 Basic two-level SOCKS5 proxy](#52普通二级socks5代理)
  - [5.3 Two-level SOCKS proxy (encrypted)](#53socks二级代理加密)
  - [5.4 Three-level SOCKS proxy (encrypted)](#54socks三级代理加密)
  - [5.5 Forcing traffic through the parent SOCKS proxy](#55socks代理流量强制走上级socks代理)
  - [5.6 Relaying over SSH](#56socks通过ssh中转)
    - [5.6.1 With an SSH username and password](#561-ssh用户名和密码的方式)
    - [5.6.2 With an SSH username and key](#562-ssh用户名和密钥的方式)
  - [5.7 Authentication](#57认证)
  - [5.8 KCP transport](#58kcp协议传输)
  - [5.9 Custom DNS](#59自定义dns)
  - [5.10 Help](#510查看帮助)
- [6. Proxy protocol conversion](#6代理协议转换)
  - [6.1 Overview](#61-功能介绍)
  - [6.2 HTTP(S) to HTTP(S)+SOCKS5](#62-https转httpssocks5)
  - [6.3 SOCKS5 to HTTP(S)+SOCKS5](#63-socks5转httpssocks5)
  - [6.4 Chained connections](#64-链式连接)
  - [6.5 Listening on multiple ports](#65-监听多个端口)
  - [6.6 Help](#66-查看帮助)
- [7. KCP configuration](#7kcp配置)
  - [7.1 Overview](#71-配置介绍)
  - [7.2 Detailed configuration](#72-详细配置)

### Fast Start
Note: all operations require root privileges.

#### Automatic install
#### **0. If your VPS runs 64-bit Linux, the following single command completes installation and configuration automatically.**
```shell
curl -L https://raw.githubusercontent.com/snail007/goproxy/master/install_auto.sh | bash
```
After installation the configuration directory is /etc/proxy; for more detailed usage see the rest of this manual.
If the installation fails, or your VPS is not 64-bit Linux, follow the semi-automatic steps below:

#### Manual install

#### **1. Download proxy**
Download: https://github.com/snail007/goproxy/releases
```shell
cd /root/proxy/
wget https://github.com/snail007/goproxy/releases/download/v4.5/proxy-linux-amd64.tar.gz
```
#### **2. Download the install script**
```shell
cd /root/proxy/
wget https://raw.githubusercontent.com/snail007/goproxy/master/install.sh
chmod +x install.sh
./install.sh
```

## **Read this before first use**

### **Environment**
The tutorials below assume Linux, with the program named proxy; all operations require root privileges.
If you are on Windows, use the Windows build, proxy.exe.

### **Using a configuration file**
The tutorials below pass parameters on the command line, but parameters can also be read from a configuration file.
Use the @ symbol to point at the file, for example: ./proxy @configfile.txt
The format of configfile.txt is: the first line is the subcommand name; from the second line on, one parameter per line in the form long-option=value, with no surrounding spaces or double quotes.
Long option names start with --, short ones with -; if you don't know which long option matches a short one, check the help output.
For example, configfile.txt might contain:
```shell
http
--local-type=tcp
--local=:33080
```
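As a small usage sketch (assuming the file above is saved as configfile.txt next to the binary), proxy would then be started like this:

```shell
./proxy @configfile.txt
```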
### **Debug output**
By default the log output does not include file line numbers. In some cases, to troubleshoot and locate problems quickly,
you can use the --debug parameter to output code line numbers and millisecond timestamps.

### **Using a log file**
By default logs are printed to the console. To save them to a file, use the --log parameter,
for example: --log proxy.log; logs are then written to proxy.log, which makes troubleshooting easier.

### **Generating certificate files for encrypted communication**
The http, tcp, and udp proxies communicate with their parent. For security this communication is encrypted (unencrypted communication is also possible); all tutorials here use encrypted communication with the parent, which requires certificate files.
On Linux with the openssl command installed, the certificate and key files can be generated directly with:
`./proxy keygen`
By default this creates the certificate file proxy.crt and the key file proxy.key in the program's directory.
More usage: `proxy keygen usage`.

### **Running in the background**
By default, once proxy is started the command line must stay open to keep it running.
To run proxy in the background so the terminal can be closed, simply append the --daemon parameter to the command.
For example:
`./proxy http -t tcp -p "0.0.0.0:38080" --daemon`

### **Supervised running**
The --forever parameter enables supervised running, for example: `proxy http --forever`.
proxy forks a child process, monitors it, and restarts it 5 seconds after an abnormal exit.
Combined with the background parameter --daemon and the log parameter --log, this keeps proxy running in the background despite unexpected exits,
and the log file records proxy's output.
For example: `proxy http -p ":9090" --forever --log proxy.log --daemon`

### **Security advice**
When the VPS sits behind a NAT device, its network interfaces only have internal IPs. In that case add the VPS's public IP with the -g parameter to prevent an infinite loop.
Assuming your VPS public IP is 23.23.23.23, the following command sets it via -g:
`./proxy http -g "23.23.23.23"`

### **1. HTTP proxy**
#### **1.1 Basic HTTP proxy**
`./proxy http -t tcp -p "0.0.0.0:38080"`

#### **1.2 Basic two-level HTTP proxy**
Listen on local port 8090, assuming the parent HTTP proxy is `22.22.22.22:8080`:
`./proxy http -t tcp -p "0.0.0.0:8090" -T tcp -P "22.22.22.22:8080" `
The connection pool is off by default. To speed up access, -L enables it: 10 is the pool size, 0 turns it off.
With an unreliable network the pool can reduce stability.
`./proxy http -t tcp -p "0.0.0.0:8090" -T tcp -P "22.22.22.22:8080" -L 10`
You can also supply black and white list files of website domains, one domain per line. Matching is rightmost-first: for example, baidu.com matches *.*.baidu.com. Blacklisted domains go through the parent proxy; whitelisted domains do not.
`./proxy http -p "0.0.0.0:8090" -T tcp -P "22.22.22.22:8080" -b blocked.txt -d direct.txt`

#### **1.3 Two-level HTTP proxy (encrypted)**
First-level HTTP proxy (VPS, IP: 22.22.22.22):
`./proxy http -t tls -p ":38080" -C proxy.crt -K proxy.key`

Second-level HTTP proxy (local Linux):
`./proxy http -t tcp -p ":8080" -T tls -P "22.22.22.22:38080" -C proxy.crt -K proxy.key`
Accessing local port 8080 then reaches the proxy port 38080 on the VPS.

Second-level HTTP proxy (local Windows):
`./proxy.exe http -t tcp -p ":8080" -T tls -P "22.22.22.22:38080" -C proxy.crt -K proxy.key`
Then configure the programs on your Windows system that should go through the proxy to use an http proxy at address 127.0.0.1, port 8080; they will reach the internet through the encrypted channel via the VPS.

#### **1.4 Three-level HTTP proxy (encrypted)**
First-level HTTP proxy VPS_01, IP: 22.22.22.22
`./proxy http -t tls -p ":38080" -C proxy.crt -K proxy.key`
Second-level HTTP proxy VPS_02, IP: 33.33.33.33
`./proxy http -t tls -p ":28080" -T tls -P "22.22.22.22:38080" -C proxy.crt -K proxy.key`
Third-level HTTP proxy (local)
`./proxy http -t tcp -p ":8080" -T tls -P "33.33.33.33:28080" -C proxy.crt -K proxy.key`
Accessing local port 8080 then reaches the proxy port 38080 on the first-level HTTP proxy.

#### **1.5 Basic authentication**
The HTTP proxy protocol supports Basic authentication; the username and password can be given on the command line:
`./proxy http -t tcp -p ":33080" -a "user1:pass1" -a "user2:pass2"`
For multiple users, repeat the -a parameter.
Credentials can also be placed in a file, one "username:password" per line, specified with -F:
`./proxy http -t tcp -p ":33080" -F auth-file.txt`
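A minimal sketch of what auth-file.txt would contain, following the "username:password per line" format described above (the accounts are made up):

```shell
user1:pass1
user2:pass2
```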

The http(s) proxy also integrates external HTTP API authentication: use the --auth-url parameter to specify an HTTP URL.
When a user connects, proxy sends a GET request to that URL with the four parameters below; an HTTP 204 status code means authentication succeeded,
anything else means it failed.
For example:
`./proxy http -t tcp -p ":33080" --auth-url "http://test.com/auth.php"`
When a user connects, proxy GETs the URL ("http://test.com/auth.php")
with the four parameters user, pass, ip, and target:
http://test.com/auth.php?user={USER}&pass={PASS}&ip={IP}&target={TARGET}
user: the username
pass: the password
ip: the user's IP, e.g. 192.168.1.200
target: the URL the user requested, e.g. http://demo.com:80/1.html or https://www.baidu.com:80

Without -a, -F, or --auth-url, Basic authentication is disabled.
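As a rough sketch of how such an endpoint is expected to behave (test.com/auth.php, the credentials, and the target are the placeholders from the example above), you can emulate the GET request proxy sends and check that the service answers with 204:

```shell
# Simulate the request proxy issues for a connecting user and print only the status code
curl -s -o /dev/null -w "%{http_code}\n" \
  "http://test.com/auth.php?user=user1&pass=pass1&ip=192.168.1.200&target=http://demo.com:80/1.html"
# A correctly configured endpoint prints 204 here; any other status code
# makes proxy reject the connection.
```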

#### **1.6 Forcing HTTP proxy traffic through the parent HTTP proxy**
By default proxy decides intelligently whether a website domain is unreachable, and only unreachable sites go through the parent HTTP proxy. With --always, all HTTP proxy traffic is forced through the parent HTTP proxy.
`./proxy http --always -t tls -p ":28080" -T tls -P "22.22.22.22:38080" -C proxy.crt -K proxy.key`

#### **1.7 HTTP(S) relaying over SSH**
Note: SSH relaying uses SSH's forwarding capability: once connected to the SSH server, target addresses are reached through it.
Assume a VPS:
- IP 2.2.2.2, SSH port 22, SSH username: user, SSH password: demo
- The SSH private key of user user is named user.key

##### ***1.7.1 With an SSH username and password***
Local HTTP(S) proxy on port 28080; run:
`./proxy http -T ssh -P "2.2.2.2:22" -u user -A demo -t tcp -p ":28080"`
##### ***1.7.2 With an SSH username and key***
Local HTTP(S) proxy on port 28080; run:
`./proxy http -T ssh -P "2.2.2.2:22" -u user -S user.key -t tcp -p ":28080"`

#### **1.8 KCP transport**
The KCP protocol requires the --kcp-key parameter to set a password for encrypting and decrypting the data.

First-level HTTP proxy (VPS, IP: 22.22.22.22)
`./proxy http -t kcp -p ":38080" --kcp-key mypassword`

Second-level HTTP proxy (local Linux)
`./proxy http -t tcp -p ":8080" -T kcp -P "22.22.22.22:38080" --kcp-key mypassword`
Accessing local port 8080 then reaches the proxy port 38080 on the VPS, with the data carried over the kcp protocol.

#### **1.9 HTTP(S) reverse proxy**
proxy can serve other software through their proxy settings, and it can also act as a reverse proxy: resolve the requested websites' domains directly to the IP proxy listens on, have proxy listen on ports 80 and 443, and proxy will automatically access the requested HTTP(S) websites for you.

How to use it:
On the machine running the "last-level proxy", proxy has to impersonate every website, and the websites' default ports are HTTP 80 and HTTPS 443, so let proxy listen on ports 80 and 443. The -p parameter accepts several addresses separated by commas.
`./proxy http -t tcp -p :80,:443`

This command starts a proxy on the machine listening on ports 80 and 443 at the same time; it can be used as an ordinary proxy, and the domains to be proxied can also be resolved directly to this machine's IP.

If there is a parent proxy, configure it as shown in the earlier tutorials; the usage is exactly the same.
`./proxy http -t tcp -p :80,:443 -T tls -P "2.2.2.2:33080" -C proxy.crt -K proxy.key`

Note:
The DNS resolution on the server running proxy must not be affected by the custom records, otherwise you get an infinite loop.

#### **1.10 Transparent HTTP(S) proxy**
This mode requires some networking background; please look up any unfamiliar concepts yourself.
Assuming proxy runs on the router, started as follows:
`./proxy http -t tcp -p :33080 -T tls -P "2.2.2.2:33090" -C proxy.crt -K proxy.key`

Then add iptables rules; the rules below are a reference:
```shell
# IP address of the upstream proxy server:
proxy_server_ip=2.2.2.2

# port proxy listens on on the router:
proxy_local_port=33080

# nothing below needs to be changed
#create a new chain named PROXY
iptables -t nat -N PROXY

# Ignore your PROXY server's addresses
# It's very IMPORTANT, just be careful.
iptables -t nat -A PROXY -d $proxy_server_ip -j RETURN

# Ignore LANs IP address
iptables -t nat -A PROXY -d 0.0.0.0/8 -j RETURN
iptables -t nat -A PROXY -d 10.0.0.0/8 -j RETURN
iptables -t nat -A PROXY -d 127.0.0.0/8 -j RETURN
iptables -t nat -A PROXY -d 169.254.0.0/16 -j RETURN
iptables -t nat -A PROXY -d 172.16.0.0/12 -j RETURN
iptables -t nat -A PROXY -d 192.168.0.0/16 -j RETURN
iptables -t nat -A PROXY -d 224.0.0.0/4 -j RETURN
iptables -t nat -A PROXY -d 240.0.0.0/4 -j RETURN

# Anything to port 80 443 should be redirected to PROXY's local port
iptables -t nat -A PROXY -p tcp --dport 80 -j REDIRECT --to-ports $proxy_local_port
iptables -t nat -A PROXY -p tcp --dport 443 -j REDIRECT --to-ports $proxy_local_port

# Apply the rules to nat client
iptables -t nat -A PREROUTING -p tcp -j PROXY
# Apply the rules to localhost
iptables -t nat -A OUTPUT -p tcp -j PROXY
```
- Flush an entire chain: iptables -F CHAIN, e.g. iptables -t nat -F PROXY
- Delete a user-defined chain: iptables -X CHAIN, e.g. iptables -t nat -X PROXY
- Delete a rule from a chain: iptables -D CHAIN RULE, e.g. iptables -t nat -D PROXY -d 223.223.192.0/255.255.240.0 -j RETURN

#### **1.11 Custom DNS**
The --dns-address and --dns-ttl parameters specify the DNS server proxy uses when resolving domains (--dns-address)
and how many seconds results are cached (--dns-ttl). This avoids interference from the system DNS, and the cache also cuts resolution time and speeds up access.
For example:
`./proxy http -p ":33080" --dns-address "8.8.8.8:53" --dns-ttl 300`

#### **1.12 Help**
`./proxy help http`

### **2. TCP proxy**

#### **2.1 Basic one-level TCP proxy**
Run locally:
`./proxy tcp -p ":33080" -T tcp -P "192.168.22.33:22"`
Accessing local port 33080 then reaches port 22 on 192.168.22.33.
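Since the mapped target in this example is an SSH port, one quick way to check the mapping (the username is a placeholder) is to SSH through the local port:

```shell
# Connects to 192.168.22.33:22 via the local TCP proxy port 33080
ssh -p 33080 someuser@127.0.0.1
```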

#### **2.2 Basic two-level TCP proxy**
On the VPS (IP: 22.22.22.33), run:
`./proxy tcp -p ":33080" -T tcp -P "127.0.0.1:8080"`
Run locally:
`./proxy tcp -p ":23080" -T tcp -P "22.22.22.33:33080"`
Accessing local port 23080 then reaches port 8080 on 22.22.22.33.

#### **2.3 Basic three-level TCP proxy**
First-level TCP proxy VPS_01, IP: 22.22.22.22
`./proxy tcp -p ":38080" -T tcp -P "66.66.66.66:8080"`
Second-level TCP proxy VPS_02, IP: 33.33.33.33
`./proxy tcp -p ":28080" -T tcp -P "22.22.22.22:38080"`
Third-level TCP proxy (local)
`./proxy tcp -p ":8080" -T tcp -P "33.33.33.33:28080"`
Accessing local port 8080 then reaches port 8080 on 66.66.66.66 through the TCP tunnel.

#### **2.4 Encrypted two-level TCP proxy**
On the VPS (IP: 22.22.22.33), run:
`./proxy tcp -t tls -p ":33080" -T tcp -P "127.0.0.1:8080" -C proxy.crt -K proxy.key`
Run locally:
`./proxy tcp -p ":23080" -T tls -P "22.22.22.33:33080" -C proxy.crt -K proxy.key`
Accessing local port 23080 then reaches port 8080 on 22.22.22.33 through the encrypted TCP tunnel.

#### **2.5 Encrypted three-level TCP proxy**
First-level TCP proxy VPS_01, IP: 22.22.22.22
`./proxy tcp -t tls -p ":38080" -T tcp -P "66.66.66.66:8080" -C proxy.crt -K proxy.key`
Second-level TCP proxy VPS_02, IP: 33.33.33.33
`./proxy tcp -t tls -p ":28080" -T tls -P "22.22.22.22:38080" -C proxy.crt -K proxy.key`
Third-level TCP proxy (local)
`./proxy tcp -p ":8080" -T tls -P "33.33.33.33:28080" -C proxy.crt -K proxy.key`
Accessing local port 8080 then reaches port 8080 on 66.66.66.66 through the encrypted TCP tunnel.

#### **2.6 Help**
`./proxy help tcp`

### **3. UDP proxy**

#### **3.1 Basic one-level UDP proxy**
Run locally:
`./proxy udp -p ":5353" -T udp -P "8.8.8.8:53"`
Accessing local UDP port 5353 then reaches UDP port 53 on 8.8.8.8.
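Because the upstream in this example is a DNS server, a simple way to check the forward (any domain will do; example.com is just a placeholder) is a dig query against the local port:

```shell
# Resolves a name through the local UDP proxy, which forwards to 8.8.8.8:53
dig @127.0.0.1 -p 5353 example.com
```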

#### **3.2 Basic two-level UDP proxy**
On the VPS (IP: 22.22.22.33), run:
`./proxy tcp -p ":33080" -T udp -P "8.8.8.8:53"`
Run locally:
`./proxy udp -p ":5353" -T tcp -P "22.22.22.33:33080"`
Accessing local UDP port 5353 then reaches UDP port 53 on 8.8.8.8 through the TCP tunnel via the VPS.

#### **3.3 Basic three-level UDP proxy**
First-level TCP proxy VPS_01, IP: 22.22.22.22
`./proxy tcp -p ":38080" -T udp -P "8.8.8.8:53"`
Second-level TCP proxy VPS_02, IP: 33.33.33.33
`./proxy tcp -p ":28080" -T tcp -P "22.22.22.22:38080"`
Third-level TCP proxy (local)
`./proxy udp -p ":5353" -T tcp -P "33.33.33.33:28080"`
Accessing local port 5353 then reaches port 53 on 8.8.8.8 through the TCP tunnel via the VPSes.

#### **3.4 Encrypted two-level UDP proxy**
On the VPS (IP: 22.22.22.33), run:
`./proxy tcp -t tls -p ":33080" -T udp -P "8.8.8.8:53" -C proxy.crt -K proxy.key`
Run locally:
`./proxy udp -p ":5353" -T tls -P "22.22.22.33:33080" -C proxy.crt -K proxy.key`
Accessing local UDP port 5353 then reaches UDP port 53 on 8.8.8.8 through the encrypted TCP tunnel via the VPS.

#### **3.5 Encrypted three-level UDP proxy**
First-level TCP proxy VPS_01, IP: 22.22.22.22
`./proxy tcp -t tls -p ":38080" -T udp -P "8.8.8.8:53" -C proxy.crt -K proxy.key`
Second-level TCP proxy VPS_02, IP: 33.33.33.33
`./proxy tcp -t tls -p ":28080" -T tls -P "22.22.22.22:38080" -C proxy.crt -K proxy.key`
Third-level TCP proxy (local)
`./proxy udp -p ":5353" -T tls -P "33.33.33.33:28080" -C proxy.crt -K proxy.key`
Accessing local port 5353 then reaches port 53 on 8.8.8.8 through the encrypted TCP tunnel via VPS_01.

#### **3.6 Help**
`./proxy help udp`

### **4. NAT traversal**
#### **4.1 How it works**
NAT traversal comes in two versions, a "multi-connection version" and a "multiplexing version". For services without long-lived connections, such as web services, the multi-connection version is generally recommended; for long-lived connections the multiplexing version is recommended.
1. Multi-connection version: subcommands tserver, tclient, tbridge.
1. Multiplexing version: subcommands server, client, bridge.
1. The parameters and usage of the two versions are identical.
1. **The multiplexing server and client can enable compressed transfers with the --c parameter.**
1. **server and client must either both enable compression or both disable it; you cannot enable it on only one side.**

The tutorial below uses the multiplexing version as the example.
NAT traversal consists of three parts: the client, the server, and the bridge; the client and the server actively connect to the bridge to be linked up.
When a user accesses the server side, the flow is:
1. the server first connects to the bridge;
1. the bridge then tells the client to connect to the bridge and to the target port;
1. the client then binds its "client-to-bridge" and "client-to-target-port" connections together;
1. the bridge then binds the connection coming from the client to the connection coming from the server;
1. the whole tunnel is established.

#### **4.2 Basic TCP usage**
Background:
- Office machine A serves a web service on port 80.
- One VPS with public IP 22.22.22.22.

Goal:
From home, reach port 80 of office machine A by accessing port 28080 on the VPS.

Steps:
1. On the VPS, run
   `./proxy bridge -p ":33080" -C proxy.crt -K proxy.key`
   `./proxy server -r ":28080@:80" -P "127.0.0.1:33080" -C proxy.crt -K proxy.key`

1. On office machine A, run
   `./proxy client -P "22.22.22.22:33080" -C proxy.crt -K proxy.key`

1. Done. (A quick way to verify is sketched below.)
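As a sanity check under the assumptions of this example (a web service on machine A and a VPS with public IP 22.22.22.22), fetching the VPS port from home should return machine A's page:

```shell
# From home: this request is carried over the bridge to port 80 on office machine A
curl -I http://22.22.22.22:28080/
```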

#### **4.3 Local WeChat API development**
Background:
- Your laptop runs an nginx service on port 80.
- One VPS with public IP 22.22.22.22.

Goal:
Fill in http://22.22.22.22/calback.php as the web callback address in the WeChat developer account settings;
requests then reach calback.php under port 80 on the laptop. If you need a domain, use your own,
for example resolve wx-dev.xxx.com to 22.22.22.22, and then configure the domain wx-dev.xxx.com
to the appropriate directory in the laptop's nginx.

Steps:
1. On the VPS, run (make sure port 80 on the VPS is not occupied by another program):
   `./proxy bridge -p ":33080" -C proxy.crt -K proxy.key`
   `./proxy server -r ":80@:80" -P "22.22.22.22:33080" -C proxy.crt -K proxy.key`

1. On the laptop, run
   `./proxy client -P "22.22.22.22:33080" -C proxy.crt -K proxy.key`

1. Done.

#### **4.4 Basic UDP usage**
Background:
- Office machine A provides a DNS resolution service on UDP port 53.
- One VPS with public IP 22.22.22.22.

Goal:
From home, set the local DNS to 22.22.22.22 and have office machine A perform the DNS resolution.

Steps:
1. On the VPS, run
   `./proxy bridge -p ":33080" -C proxy.crt -K proxy.key`
   `./proxy server --udp -r ":53@:53" -P "127.0.0.1:33080" -C proxy.crt -K proxy.key`

1. On office machine A, run
   `./proxy client -P "22.22.22.22:33080" -C proxy.crt -K proxy.key`

1. Done.

#### **4.5 Advanced usage 1**
Background:
- Office machine A serves a web service on port 80.
- One VPS with public IP 22.22.22.22.

Goal:
For security, office machine A should not be reachable from the VPS itself; from home, port 80 of office machine A is reached through the encrypted tunnel
by accessing port 28080 on the home machine.

Steps:
1. On the VPS, run
   `./proxy bridge -p ":33080" -C proxy.crt -K proxy.key`

1. On office machine A, run
   `./proxy client -P "22.22.22.22:33080" -C proxy.crt -K proxy.key`

1. On the home computer, run
   `./proxy server -r ":28080@:80" -P "22.22.22.22:33080" -C proxy.crt -K proxy.key`

1. Done.

#### **4.6 Advanced usage 2**
Tips:
If several clients connect to the same bridge at the same time, each needs a distinct key, set with --k; --k can be any string,
as long as it is unique on that bridge.
When a server connects to the bridge and several clients are connected to the same bridge, use --k to select the client.
To expose several ports, repeat the -r parameter. The -r format is "local IP:local port@clientHOST:client port".

Background:
- Office machine A serves a web service on port 80 and an ftp service on port 21.
- One VPS with public IP 22.22.22.22.

Goal:
From home, reach port 80 of office machine A by accessing port 28080 on the VPS.
From home, reach port 21 of office machine A by accessing port 29090 on the VPS.

Steps:
1. On the VPS, run
   `./proxy bridge -p ":33080" -C proxy.crt -K proxy.key`
   `./proxy server -r ":28080@:80" -r ":29090@:21" --k test -P "127.0.0.1:33080" -C proxy.crt -K proxy.key`

1. On office machine A, run
   `./proxy client --k test -P "22.22.22.22:33080" -C proxy.crt -K proxy.key`

1. Done.

#### **4.7 The server -r parameter**
The full -r format is: `PROTOCOL://LOCAL_IP:LOCAL_PORT@[CLIENT_KEY]CLIENT_LOCAL_HOST:CLIENT_LOCAL_PORT`

4.7.1 PROTOCOL is tcp or udp.
For example: `-r "udp://:10053@:53" -r "tcp://:10800@:1080" -r ":8080@:80"`
If --udp is given, PROTOCOL defaults to udp, so `-r ":8080@:80"` defaults to udp;
if --udp is not given, PROTOCOL defaults to tcp, so `-r ":8080@:80"` defaults to tcp.

4.7.2 CLIENT_KEY defaults to default.
For example: -r "udp://:10053@[test1]:53" -r "tcp://:10800@[test2]:1080" -r ":8080@:80"
If --k is given, e.g. --k test, then the CLIENT_KEY of `-r ":8080@:80"` defaults to test;
if --k is not given, the CLIENT_KEY of `-r ":8080@:80"` defaults to default.

4.7.3 An empty LOCAL_IP defaults to `0.0.0.0`; an empty CLIENT_LOCAL_HOST defaults to `127.0.0.1`.

#### **4.8 Help**
`./proxy help bridge`
`./proxy help server`
`./proxy help client`

### **5. SOCKS5 proxy**
Note: the SOCKS5 proxy supports CONNECT and the UDP protocol, does not support BIND, and supports username/password authentication.
#### **5.1 Basic SOCKS5 proxy**
`./proxy socks -t tcp -p "0.0.0.0:38080"`

#### **5.2 Basic two-level SOCKS5 proxy**
Listen on local port 8090, assuming the parent SOCKS5 proxy is `22.22.22.22:8080`:
`./proxy socks -t tcp -p "0.0.0.0:8090" -T tcp -P "22.22.22.22:8080" `
You can also supply black and white list files of website domains, one domain per line. Matching is rightmost-first: for example, baidu.com matches *.*.baidu.com. Blacklisted domains go through the parent proxy; whitelisted domains do not.
`./proxy socks -p "0.0.0.0:8090" -T tcp -P "22.22.22.22:8080" -b blocked.txt -d direct.txt`

#### **5.3 Two-level SOCKS proxy (encrypted)**
First-level SOCKS proxy (VPS, IP: 22.22.22.22):
`./proxy socks -t tls -p ":38080" -C proxy.crt -K proxy.key`

Second-level SOCKS proxy (local Linux):
`./proxy socks -t tcp -p ":8080" -T tls -P "22.22.22.22:38080" -C proxy.crt -K proxy.key`
Accessing local port 8080 then reaches the proxy port 38080 on the VPS.

Second-level SOCKS proxy (local Windows):
`./proxy.exe socks -t tcp -p ":8080" -T tls -P "22.22.22.22:38080" -C proxy.crt -K proxy.key`
Then configure the programs on your Windows system that should go through the proxy to use a socks5 proxy at address 127.0.0.1, port 8080; they will reach the internet through the encrypted channel via the VPS.

#### **5.4 Three-level SOCKS proxy (encrypted)**
First-level SOCKS proxy VPS_01, IP: 22.22.22.22
`./proxy socks -t tls -p ":38080" -C proxy.crt -K proxy.key`
Second-level SOCKS proxy VPS_02, IP: 33.33.33.33
`./proxy socks -t tls -p ":28080" -T tls -P "22.22.22.22:38080" -C proxy.crt -K proxy.key`
Third-level SOCKS proxy (local)
`./proxy socks -t tcp -p ":8080" -T tls -P "33.33.33.33:28080" -C proxy.crt -K proxy.key`
Accessing local port 8080 then reaches the proxy port 38080 on the first-level SOCKS proxy.

#### **5.5 Forcing SOCKS proxy traffic through the parent SOCKS proxy**
By default proxy decides intelligently whether a website domain is unreachable, and only unreachable sites go through the parent SOCKS proxy. With --always, all SOCKS proxy traffic is forced through the parent SOCKS proxy.
`./proxy socks --always -t tls -p ":28080" -T tls -P "22.22.22.22:38080" -C proxy.crt -K proxy.key`

#### **5.6 SOCKS relaying over SSH**
Note: SSH relaying uses SSH's forwarding capability: once connected to the SSH server, target addresses are reached through it.
Assume a VPS:
- IP 2.2.2.2, SSH port 22, SSH username: user, SSH password: demo
- The SSH private key of user user is named user.key

##### ***5.6.1 With an SSH username and password***
Local SOCKS5 proxy on port 28080; run:
`./proxy socks -T ssh -P "2.2.2.2:22" -u user -A demo -t tcp -p ":28080"`
##### ***5.6.2 With an SSH username and key***
Local SOCKS5 proxy on port 28080; run:
`./proxy socks -T ssh -P "2.2.2.2:22" -u user -S user.key -t tcp -p ":28080"`

Accessing local port 28080 then reaches target addresses through the VPS.

#### **5.7 Authentication**
The socks5 proxy protocol supports username/password authentication; credentials can be given on the command line:
`./proxy socks -t tcp -p ":33080" -a "user1:pass1" -a "user2:pass2"`
For multiple users, repeat the -a parameter.
Credentials can also be placed in a file, one "username:password" per line, specified with -F:
`./proxy socks -t tcp -p ":33080" -F auth-file.txt`

The socks5 proxy also integrates external HTTP API authentication: use the --auth-url parameter to specify an HTTP URL.
When a user connects, proxy sends a GET request to that URL with the parameters below; an HTTP 204 status code means authentication succeeded,
anything else means it failed.
For example:
`./proxy socks -t tcp -p ":33080" --auth-url "http://test.com/auth.php"`
When a user connects, proxy GETs the URL ("http://test.com/auth.php")
with the three parameters user, pass, and ip:
http://test.com/auth.php?user={USER}&pass={PASS}&ip={IP}
user: the username
pass: the password
ip: the user's IP, e.g. 192.168.1.200

Without -a, -F, or --auth-url, authentication is disabled.

#### **5.8 KCP transport**
The KCP protocol requires the --kcp-key parameter to set a password for encrypting and decrypting the data.

First-level SOCKS proxy (VPS, IP: 22.22.22.22)
`./proxy socks -t kcp -p ":38080" --kcp-key mypassword`

Second-level SOCKS proxy (local Linux)
`./proxy socks -t tcp -p ":8080" -T kcp -P "22.22.22.22:38080" --kcp-key mypassword`
Accessing local port 8080 then reaches the proxy port 38080 on the VPS, with the data carried over the kcp protocol.

#### **5.9 Custom DNS**
The --dns-address and --dns-ttl parameters specify the DNS server proxy uses when resolving domains (--dns-address)
and how many seconds results are cached (--dns-ttl). This avoids interference from the system DNS, and the cache also cuts resolution time and speeds up access.
For example:
`./proxy socks -p ":33080" --dns-address "8.8.8.8:53" --dns-ttl 300`

#### **5.10 Help**
`./proxy help socks`

### **6. Proxy protocol conversion**

#### **6.1 Overview**
Protocol conversion uses the sps subcommand (short for socks+https). sps does not provide proxy functionality itself; it accepts proxy requests and "converts and forwards" them to an existing http(s) or socks5 proxy. sps can turn an existing http(s) or socks5 proxy into a single port that serves both http(s) and socks5, with the http(s) side supporting forward and reverse proxying (SNI); the converted SOCKS5 proxy does not support UDP. The existing http(s) or socks5 proxy can be reached over tls, tcp, or kcp, and chained connections are supported, so multiple sps nodes can be connected in levels to build an encrypted tunnel.

#### **6.2 HTTP(S) to HTTP(S)+SOCKS5**
Suppose there is an ordinary http(s) proxy at 127.0.0.1:8080 and we want to turn it into an ordinary proxy serving both http(s) and socks5 on local port 18080.
The command is:
`./proxy sps -S http -T tcp -P 127.0.0.1:8080 -t tcp -p :18080`

Suppose there is a tls http(s) proxy at 127.0.0.1:8080 and we want to turn it into an ordinary proxy serving both http(s) and socks5 on local port 18080; tls requires the certificate files.
The command is:
`./proxy sps -S http -T tls -P 127.0.0.1:8080 -t tcp -p :18080 -C proxy.crt -K proxy.key`

Suppose there is a kcp http(s) proxy (password: demo123) at 127.0.0.1:8080 and we want to turn it into an ordinary proxy serving both http(s) and socks5 on local port 18080.
The command is:
`./proxy sps -S http -T kcp -P 127.0.0.1:8080 -t tcp -p :18080 --kcp-key demo123`

#### **6.3 SOCKS5 to HTTP(S)+SOCKS5**
Suppose there is an ordinary socks5 proxy at 127.0.0.1:8080 and we want to turn it into an ordinary proxy serving both http(s) and socks5 on local port 18080.
The command is:
`./proxy sps -S socks -T tcp -P 127.0.0.1:8080 -t tcp -p :18080`

Suppose there is a tls socks5 proxy at 127.0.0.1:8080 and we want to turn it into an ordinary proxy serving both http(s) and socks5 on local port 18080; tls requires the certificate files.
The command is:
`./proxy sps -S socks -T tls -P 127.0.0.1:8080 -t tcp -p :18080 -C proxy.crt -K proxy.key`

Suppose there is a kcp socks5 proxy (password: demo123) at 127.0.0.1:8080 and we want to turn it into an ordinary proxy serving both http(s) and socks5 on local port 18080.
The command is:
`./proxy sps -S socks -T kcp -P 127.0.0.1:8080 -t tcp -p :18080 --kcp-key demo123`

#### **6.4 Chained connections**
As mentioned above, several sps nodes can be connected in levels to build an encrypted tunnel. Suppose we have the following VPSes and a PC at home:
vps01: 2.2.2.2
vps02: 3.3.3.3
We want to build an encrypted tunnel from the PC through vps01 and vps02 (this example uses tls encryption, kcp would also work), so that accessing local port 18080 on the PC reaches local port 8080 on vps01.
First, on vps01 (2.2.2.2), run an http(s) proxy reachable only locally:
`./proxy http -t tcp -p 127.0.0.1:8080`

Then run an sps node on vps01 (2.2.2.2):
`./proxy sps -S http -T tcp -P 127.0.0.1:8080 -t tls -p :8081 -C proxy.crt -K proxy.key`

Then run an sps node on vps02 (3.3.3.3):
`./proxy sps -S http -T tls -P 2.2.2.2:8081 -t tls -p :8082 -C proxy.crt -K proxy.key`

Then run an sps node on the PC:
`./proxy sps -S http -T tls -P 3.3.3.3:8082 -t tcp -p :18080 -C proxy.crt -K proxy.key`

Done.

#### **6.5 Listening on multiple ports**
Usually listening on a single port is enough, but if you need a reverse proxy that listens on both 80 and 443, the -p parameter supports it:
the format is `-p 0.0.0.0:80,0.0.0.0:443` — separate multiple bindings with commas.
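For example, under the assumptions of section 6.2 (an existing http(s) proxy on 127.0.0.1:8080), a converted proxy listening on both ports would look like this sketch:

```shell
./proxy sps -S http -T tcp -P 127.0.0.1:8080 -t tcp -p 0.0.0.0:80,0.0.0.0:443
```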

#### **6.6 Help**
`./proxy help sps`

### **7. KCP configuration**

#### **7.1 Overview**
Many proxy features support the kcp protocol, and every feature that uses kcp supports the configuration parameters described here,
so the KCP parameters are documented once in this section.

#### **7.2 Detailed configuration**
There are 17 KCP configuration parameters in total. You do not have to set any of them — they all have defaults — but for the best results
you should tune them to your own network conditions. KCP configuration is complex and requires some networking background;
if you want a more detailed explanation of the kcp parameters, please look them up yourself. The command-line name, default value, and a short description of each parameter are:
```
--kcp-key="secrect"         pre-shared secret between client and server
--kcp-method="aes"          encrypt/decrypt method, can be: aes, aes-128, aes-192, salsa20, blowfish,
                            twofish, cast5, 3des, tea, xtea, xor, sm4, none
--kcp-mode="fast"           profiles: fast3, fast2, fast, normal, manual
--kcp-mtu=1350              set maximum transmission unit for UDP packets
--kcp-sndwnd=1024           set send window size(num of packets)
--kcp-rcvwnd=1024           set receive window size(num of packets)
--kcp-ds=10                 set reed-solomon erasure coding - datashard
--kcp-ps=3                  set reed-solomon erasure coding - parityshard
--kcp-dscp=0                set DSCP(6bit)
--kcp-nocomp                disable compression
--kcp-acknodelay            be carefull! flush ack immediately when a packet is received
--kcp-nodelay=0             be carefull!
--kcp-interval=50           be carefull!
--kcp-resend=0              be carefull!
--kcp-nc=0                  be carefull! no congestion
--kcp-sockbuf=4194304       be carefull!
--kcp-keepalive=10          be carefull!
```
Tip:
The four --kcp-mode profiles fast3, fast2, fast, and normal
are equivalent to setting the following four parameters (a combined manual-mode sketch follows this list):
normal: `--nodelay=0 --interval=40 --resend=2 --nc=1`
fast: `--nodelay=0 --interval=30 --resend=2 --nc=1`
fast2: `--nodelay=1 --interval=20 --resend=2 --nc=1`
fast3: `--nodelay=1 --interval=10 --resend=2 --nc=1`
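Putting this together, a manual-mode tuning sketch (the values are chosen only for illustration; the flags follow the --kcp-* names listed above, as also defined in config.go below):

```shell
./proxy http -t kcp -p ":38080" --kcp-key mypassword \
  --kcp-mode manual --kcp-nodelay 1 --kcp-interval 20 --kcp-resend 2 --kcp-nc 1 \
  --kcp-sndwnd 2048 --kcp-rcvwnd 2048
```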

### TODO
- Load balancing across multiple parents for the http and socks proxies?
- PAC support for the http(s) proxy?
- Join the group and give feedback...

### How to build from source?
go1.8.5 is recommended; >=1.9 is not guaranteed to work.
cd into your go src directory and create a folder named snail007,
cd into snail007, then run git clone https://github.com/snail007/goproxy.git ./proxy .
Build with: go build
Run with: go run *.go
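The steps above, collected into one shell sketch (a classic GOPATH layout is assumed; adapt the $GOPATH path to your setup):

```shell
# assumes GOPATH is set and the classic go src layout is used
cd "$GOPATH/src"
mkdir -p snail007
cd snail007
git clone https://github.com/snail007/goproxy.git ./proxy
cd proxy
go build      # produces the proxy binary
go run *.go   # or run directly from source
```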
utils is the utility package, and service holds each concrete service class.

### License
Proxy is licensed under the GPLv3 license.
### Contact
QQ group: 189618940

### Donation
If proxy has solved a lot of problems for you, you can support it better with one of the donations below.
<img src="https://github.com/snail007/goproxy/blob/master/docs/images/alipay.jpg?raw=true" width="200"/>
<img src="https://github.com/snail007/goproxy/blob/master/docs/images/wxpay.jpg?raw=true" width="200"/>
237
config.go
237
config.go
@ -1,19 +1,26 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"crypto/sha1"
|
||||
"fmt"
|
||||
"log"
|
||||
"os"
|
||||
"os/exec"
|
||||
"proxy/services"
|
||||
"proxy/utils"
|
||||
"snail007/proxy/services"
|
||||
"snail007/proxy/services/kcpcfg"
|
||||
"snail007/proxy/utils"
|
||||
"time"
|
||||
|
||||
kcp "github.com/xtaci/kcp-go"
|
||||
"golang.org/x/crypto/pbkdf2"
|
||||
kingpin "gopkg.in/alecthomas/kingpin.v2"
|
||||
)
|
||||
|
||||
var (
|
||||
app *kingpin.Application
|
||||
service *services.ServiceItem
|
||||
cmd *exec.Cmd
|
||||
)
|
||||
|
||||
func initConfig() (err error) {
|
||||
@ -31,18 +38,42 @@ func initConfig() (err error) {
|
||||
tunnelServerArgs := services.TunnelServerArgs{}
|
||||
tunnelClientArgs := services.TunnelClientArgs{}
|
||||
tunnelBridgeArgs := services.TunnelBridgeArgs{}
|
||||
muxServerArgs := services.MuxServerArgs{}
|
||||
muxClientArgs := services.MuxClientArgs{}
|
||||
muxBridgeArgs := services.MuxBridgeArgs{}
|
||||
udpArgs := services.UDPArgs{}
|
||||
socksArgs := services.SocksArgs{}
|
||||
spsArgs := services.SPSArgs{}
|
||||
kcpArgs := kcpcfg.KCPConfigArgs{}
|
||||
//build srvice args
|
||||
app = kingpin.New("proxy", "happy with proxy")
|
||||
app.Author("snail").Version(APP_VERSION)
|
||||
debug := app.Flag("debug", "debug log output").Default("false").Bool()
|
||||
daemon := app.Flag("daemon", "run proxy in background").Default("false").Bool()
|
||||
forever := app.Flag("forever", "run proxy in forever,fail and retry").Default("false").Bool()
|
||||
logfile := app.Flag("log", "log file path").Default("").String()
|
||||
kcpArgs.Key = app.Flag("kcp-key", "pre-shared secret between client and server").Default("secrect").String()
|
||||
kcpArgs.Crypt = app.Flag("kcp-method", "encrypt/decrypt method, can be: aes, aes-128, aes-192, salsa20, blowfish, twofish, cast5, 3des, tea, xtea, xor, sm4, none").Default("aes").Enum("aes", "aes-128", "aes-192", "salsa20", "blowfish", "twofish", "cast5", "3des", "tea", "xtea", "xor", "sm4", "none")
|
||||
kcpArgs.Mode = app.Flag("kcp-mode", "profiles: fast3, fast2, fast, normal, manual").Default("fast").Enum("fast3", "fast2", "fast", "normal", "manual")
|
||||
kcpArgs.MTU = app.Flag("kcp-mtu", "set maximum transmission unit for UDP packets").Default("1350").Int()
|
||||
kcpArgs.SndWnd = app.Flag("kcp-sndwnd", "set send window size(num of packets)").Default("1024").Int()
|
||||
kcpArgs.RcvWnd = app.Flag("kcp-rcvwnd", "set receive window size(num of packets)").Default("1024").Int()
|
||||
kcpArgs.DataShard = app.Flag("kcp-ds", "set reed-solomon erasure coding - datashard").Default("10").Int()
|
||||
kcpArgs.ParityShard = app.Flag("kcp-ps", "set reed-solomon erasure coding - parityshard").Default("3").Int()
|
||||
kcpArgs.DSCP = app.Flag("kcp-dscp", "set DSCP(6bit)").Default("0").Int()
|
||||
kcpArgs.NoComp = app.Flag("kcp-nocomp", "disable compression").Default("false").Bool()
|
||||
kcpArgs.AckNodelay = app.Flag("kcp-acknodelay", "be carefull! flush ack immediately when a packet is received").Default("true").Bool()
|
||||
kcpArgs.NoDelay = app.Flag("kcp-nodelay", "be carefull!").Default("0").Int()
|
||||
kcpArgs.Interval = app.Flag("kcp-interval", "be carefull!").Default("50").Int()
|
||||
kcpArgs.Resend = app.Flag("kcp-resend", "be carefull!").Default("0").Int()
|
||||
kcpArgs.NoCongestion = app.Flag("kcp-nc", "be carefull! no congestion").Default("0").Int()
|
||||
kcpArgs.SockBuf = app.Flag("kcp-sockbuf", "be carefull!").Default("4194304").Int()
|
||||
kcpArgs.KeepAlive = app.Flag("kcp-keepalive", "be carefull!").Default("10").Int()
|
||||
|
||||
//########http#########
|
||||
http := app.Command("http", "proxy on http mode")
|
||||
httpArgs.Parent = http.Flag("parent", "parent address, such as: \"23.32.32.19:28008\"").Default("").Short('P').String()
|
||||
httpArgs.CaCertFile = http.Flag("ca", "ca cert file for tls").Default("").String()
|
||||
httpArgs.CertFile = http.Flag("cert", "cert file for tls").Short('C').Default("proxy.crt").String()
|
||||
httpArgs.KeyFile = http.Flag("key", "key file for tls").Short('K').Default("proxy.key").String()
|
||||
httpArgs.LocalType = http.Flag("local-type", "local protocol type <tls|tcp|kcp>").Default("tcp").Short('t').Enum("tls", "tcp", "kcp")
|
||||
@ -57,18 +88,18 @@ func initConfig() (err error) {
|
||||
httpArgs.Auth = http.Flag("auth", "http basic auth username and password, mutiple user repeat -a ,such as: -a user1:pass1 -a user2:pass2").Short('a').Strings()
|
||||
httpArgs.PoolSize = http.Flag("pool-size", "conn pool size , which connect to parent proxy, zero: means turn off pool").Short('L').Default("0").Int()
|
||||
httpArgs.CheckParentInterval = http.Flag("check-parent-interval", "check if proxy is okay every interval seconds,zero: means no check").Short('I').Default("3").Int()
|
||||
httpArgs.Local = http.Flag("local", "local ip:port to listen").Short('p').Default(":33080").String()
|
||||
httpArgs.Local = http.Flag("local", "local ip:port to listen,multiple address use comma split,such as: 0.0.0.0:80,0.0.0.0:443").Short('p').Default(":33080").String()
|
||||
httpArgs.SSHUser = http.Flag("ssh-user", "user for ssh").Short('u').Default("").String()
|
||||
httpArgs.SSHKeyFile = http.Flag("ssh-key", "private key file for ssh").Short('S').Default("").String()
|
||||
httpArgs.SSHKeyFileSalt = http.Flag("ssh-keysalt", "salt of ssh private key").Short('s').Default("").String()
|
||||
httpArgs.SSHPassword = http.Flag("ssh-password", "password for ssh").Short('A').Default("").String()
|
||||
httpArgs.KCPKey = http.Flag("kcp-key", "key for kcp encrypt/decrypt data").Short('B').Default("encrypt").String()
|
||||
httpArgs.KCPMethod = http.Flag("kcp-method", "kcp encrypt/decrypt method").Short('M').Default("3des").String()
|
||||
httpArgs.LocalIPS = http.Flag("local bind ips", "if your host behind a nat,set your public ip here avoid dead loop").Short('g').Strings()
|
||||
httpArgs.AuthURL = http.Flag("auth-url", "http basic auth username and password will send to this url,response http code equal to 'auth-code' means ok,others means fail.").Default("").String()
|
||||
httpArgs.AuthURLTimeout = http.Flag("auth-timeout", "access 'auth-url' timeout milliseconds").Default("3000").Int()
|
||||
httpArgs.AuthURLOkCode = http.Flag("auth-code", "access 'auth-url' success http code").Default("204").Int()
|
||||
httpArgs.AuthURLRetry = http.Flag("auth-retry", "access 'auth-url' fail and retry count").Default("1").Int()
|
||||
httpArgs.DNSAddress = http.Flag("dns-address", "if set this, proxy will use this dns for resolve doamin").Short('q').Default("").String()
|
||||
httpArgs.DNSTTL = http.Flag("dns-ttl", "caching seconds of dns query result").Short('e').Default("300").Int()
|
||||
|
||||
//########tcp#########
|
||||
tcp := app.Command("tcp", "proxy on tcp mode")
|
||||
@ -81,8 +112,6 @@ func initConfig() (err error) {
|
||||
tcpArgs.PoolSize = tcp.Flag("pool-size", "conn pool size , which connect to parent proxy, zero: means turn off pool").Short('L').Default("0").Int()
|
||||
tcpArgs.CheckParentInterval = tcp.Flag("check-parent-interval", "check if proxy is okay every interval seconds,zero: means no check").Short('I').Default("3").Int()
|
||||
tcpArgs.Local = tcp.Flag("local", "local ip:port to listen").Short('p').Default(":33080").String()
|
||||
tcpArgs.KCPKey = tcp.Flag("kcp-key", "key for kcp encrypt/decrypt data").Short('B').Default("encrypt").String()
|
||||
tcpArgs.KCPMethod = tcp.Flag("kcp-method", "kcp encrypt/decrypt method").Short('M').Default("3des").String()
|
||||
|
||||
//########udp#########
|
||||
udp := app.Command("udp", "proxy on udp mode")
|
||||
@ -95,6 +124,38 @@ func initConfig() (err error) {
|
||||
udpArgs.CheckParentInterval = udp.Flag("check-parent-interval", "check if proxy is okay every interval seconds,zero: means no check").Short('I').Default("3").Int()
|
||||
udpArgs.Local = udp.Flag("local", "local ip:port to listen").Short('p').Default(":33080").String()
|
||||
|
||||
//########mux-server#########
|
||||
muxServer := app.Command("server", "proxy on mux server mode")
|
||||
muxServerArgs.Parent = muxServer.Flag("parent", "parent address, such as: \"23.32.32.19:28008\"").Default("").Short('P').String()
|
||||
muxServerArgs.ParentType = muxServer.Flag("parent-type", "parent protocol type <tls|tcp|kcp>").Default("tls").Short('T').Enum("tls", "tcp", "kcp")
|
||||
muxServerArgs.CertFile = muxServer.Flag("cert", "cert file for tls").Short('C').Default("proxy.crt").String()
|
||||
muxServerArgs.KeyFile = muxServer.Flag("key", "key file for tls").Short('K').Default("proxy.key").String()
|
||||
muxServerArgs.Timeout = muxServer.Flag("timeout", "tcp timeout with milliseconds").Short('i').Default("2000").Int()
|
||||
muxServerArgs.IsUDP = muxServer.Flag("udp", "proxy on udp mux server mode").Default("false").Bool()
|
||||
muxServerArgs.Key = muxServer.Flag("k", "client key").Default("default").String()
|
||||
muxServerArgs.Route = muxServer.Flag("route", "local route to client's network, such as: PROTOCOL://LOCAL_IP:LOCAL_PORT@[CLIENT_KEY]CLIENT_LOCAL_HOST:CLIENT_LOCAL_PORT").Short('r').Default("").Strings()
|
||||
muxServerArgs.IsCompress = muxServer.Flag("c", "compress data when tcp|tls mode").Default("false").Bool()
|
||||
muxServerArgs.SessionCount = muxServer.Flag("session-count", "session count which connect to bridge").Short('n').Default("10").Int()
|
||||
|
||||
//########mux-client#########
|
||||
muxClient := app.Command("client", "proxy on mux client mode")
|
||||
muxClientArgs.Parent = muxClient.Flag("parent", "parent address, such as: \"23.32.32.19:28008\"").Default("").Short('P').String()
|
||||
muxClientArgs.ParentType = muxClient.Flag("parent-type", "parent protocol type <tls|tcp|kcp>").Default("tls").Short('T').Enum("tls", "tcp", "kcp")
|
||||
muxClientArgs.CertFile = muxClient.Flag("cert", "cert file for tls").Short('C').Default("proxy.crt").String()
|
||||
muxClientArgs.KeyFile = muxClient.Flag("key", "key file for tls").Short('K').Default("proxy.key").String()
|
||||
muxClientArgs.Timeout = muxClient.Flag("timeout", "tcp timeout with milliseconds").Short('i').Default("2000").Int()
|
||||
muxClientArgs.Key = muxClient.Flag("k", "key same with server").Default("default").String()
|
||||
muxClientArgs.IsCompress = muxClient.Flag("c", "compress data when tcp|tls mode").Default("false").Bool()
|
||||
muxClientArgs.SessionCount = muxClient.Flag("session-count", "session count which connect to bridge").Short('n').Default("10").Int()
|
||||
|
||||
//########mux-bridge#########
|
||||
muxBridge := app.Command("bridge", "proxy on mux bridge mode")
|
||||
muxBridgeArgs.CertFile = muxBridge.Flag("cert", "cert file for tls").Short('C').Default("proxy.crt").String()
|
||||
muxBridgeArgs.KeyFile = muxBridge.Flag("key", "key file for tls").Short('K').Default("proxy.key").String()
|
||||
muxBridgeArgs.Timeout = muxBridge.Flag("timeout", "tcp timeout with milliseconds").Short('i').Default("2000").Int()
|
||||
muxBridgeArgs.Local = muxBridge.Flag("local", "local ip:port to listen").Short('p').Default(":33080").String()
|
||||
muxBridgeArgs.LocalType = muxBridge.Flag("local-type", "local protocol type <tls|tcp|kcp>").Default("tls").Short('t').Enum("tls", "tcp", "kcp")
|
||||
|
||||
//########tunnel-server#########
|
||||
tunnelServer := app.Command("tserver", "proxy on tunnel server mode")
|
||||
tunnelServerArgs.Parent = tunnelServer.Flag("parent", "parent address, such as: \"23.32.32.19:28008\"").Default("").Short('P').String()
|
||||
@ -103,8 +164,7 @@ func initConfig() (err error) {
|
||||
tunnelServerArgs.Timeout = tunnelServer.Flag("timeout", "tcp timeout with milliseconds").Short('t').Default("2000").Int()
|
||||
tunnelServerArgs.IsUDP = tunnelServer.Flag("udp", "proxy on udp tunnel server mode").Default("false").Bool()
|
||||
tunnelServerArgs.Key = tunnelServer.Flag("k", "client key").Default("default").String()
|
||||
tunnelServerArgs.Route = tunnelServer.Flag("route", "local route to client's network, such as :PROTOCOL://LOCAL_IP:LOCAL_PORT@[CLIENT_KEY]CLIENT_LOCAL_HOST:CLIENT_LOCAL_PORT").Short('r').Default("").Strings()
|
||||
tunnelServerArgs.Mux = tunnelServer.Flag("mux", "use multiplexing mode").Default("false").Bool()
|
||||
tunnelServerArgs.Route = tunnelServer.Flag("route", "local route to client's network, such as: PROTOCOL://LOCAL_IP:LOCAL_PORT@[CLIENT_KEY]CLIENT_LOCAL_HOST:CLIENT_LOCAL_PORT").Short('r').Default("").Strings()
|
||||
|
||||
//########tunnel-client#########
|
||||
tunnelClient := app.Command("tclient", "proxy on tunnel client mode")
|
||||
@ -113,7 +173,6 @@ func initConfig() (err error) {
|
||||
tunnelClientArgs.KeyFile = tunnelClient.Flag("key", "key file for tls").Short('K').Default("proxy.key").String()
|
||||
tunnelClientArgs.Timeout = tunnelClient.Flag("timeout", "tcp timeout with milliseconds").Short('t').Default("2000").Int()
|
||||
tunnelClientArgs.Key = tunnelClient.Flag("k", "key same with server").Default("default").String()
|
||||
tunnelClientArgs.Mux = tunnelClient.Flag("mux", "use multiplexing mode").Default("false").Bool()
|
||||
|
||||
//########tunnel-bridge#########
|
||||
tunnelBridge := app.Command("tbridge", "proxy on tunnel bridge mode")
|
||||
@ -121,7 +180,6 @@ func initConfig() (err error) {
|
||||
tunnelBridgeArgs.KeyFile = tunnelBridge.Flag("key", "key file for tls").Short('K').Default("proxy.key").String()
|
||||
tunnelBridgeArgs.Timeout = tunnelBridge.Flag("timeout", "tcp timeout with milliseconds").Short('t').Default("2000").Int()
|
||||
tunnelBridgeArgs.Local = tunnelBridge.Flag("local", "local ip:port to listen").Short('p').Default(":33080").String()
|
||||
tunnelBridgeArgs.Mux = tunnelBridge.Flag("mux", "use multiplexing mode").Default("false").Bool()
|
||||
|
||||
//########ssh#########
|
||||
socks := app.Command("socks", "proxy on ssh mode")
|
||||
@ -132,6 +190,7 @@ func initConfig() (err error) {
|
||||
socksArgs.UDPParent = socks.Flag("udp-parent", "udp parent address, such as: \"23.32.32.19:33090\"").Default("").Short('X').String()
|
||||
socksArgs.UDPLocal = socks.Flag("udp-local", "udp local ip:port to listen").Short('x').Default(":33090").String()
|
||||
socksArgs.CertFile = socks.Flag("cert", "cert file for tls").Short('C').Default("proxy.crt").String()
|
||||
socksArgs.CaCertFile = socks.Flag("ca", "ca cert file for tls").Default("").String()
|
||||
socksArgs.KeyFile = socks.Flag("key", "key file for tls").Short('K').Default("proxy.key").String()
|
||||
socksArgs.SSHUser = socks.Flag("ssh-user", "user for ssh").Short('u').Default("").String()
|
||||
socksArgs.SSHKeyFile = socks.Flag("ssh-key", "private key file for ssh").Short('S').Default("").String()
|
||||
@ -144,29 +203,83 @@ func initConfig() (err error) {
|
||||
socksArgs.Direct = socks.Flag("direct", "direct domain file , one domain each line").Default("direct").Short('d').String()
|
||||
socksArgs.AuthFile = socks.Flag("auth-file", "http basic auth file,\"username:password\" each line in file").Short('F').String()
|
||||
socksArgs.Auth = socks.Flag("auth", "socks auth username and password, multiple users repeat -a, such as: -a user1:pass1 -a user2:pass2").Short('a').Strings()
|
||||
socksArgs.KCPKey = socks.Flag("kcp-key", "key for kcp encrypt/decrypt data").Short('B').Default("encrypt").String()
|
||||
socksArgs.KCPMethod = socks.Flag("kcp-method", "kcp encrypt/decrypt method").Short('M').Default("3des").String()
|
||||
socksArgs.LocalIPS = socks.Flag("local bind ips", "if your host is behind a nat, set your public ip here to avoid a dead loop").Short('g').Strings()
|
||||
socksArgs.AuthURL = socks.Flag("auth-url", "auth username and password will be sent to this url; a response http code equal to 'auth-code' means ok, others mean fail").Default("").String()
|
||||
socksArgs.AuthURLTimeout = socks.Flag("auth-timeout", "access 'auth-url' timeout milliseconds").Default("3000").Int()
|
||||
socksArgs.AuthURLOkCode = socks.Flag("auth-code", "access 'auth-url' success http code").Default("204").Int()
|
||||
socksArgs.AuthURLRetry = socks.Flag("auth-retry", "access 'auth-url' fail and retry count").Default("0").Int()
|
||||
socksArgs.DNSAddress = socks.Flag("dns-address", "if set, proxy will use this dns to resolve domains").Short('q').Default("").String()
|
||||
socksArgs.DNSTTL = socks.Flag("dns-ttl", "caching seconds of dns query result").Short('e').Default("300").Int()
|
||||
//########socks+http(s)#########
|
||||
sps := app.Command("sps", "proxy on socks+http(s) mode")
|
||||
spsArgs.Parent = sps.Flag("parent", "parent address, such as: \"23.32.32.19:28008\"").Default("").Short('P').String()
|
||||
spsArgs.CertFile = sps.Flag("cert", "cert file for tls").Short('C').Default("proxy.crt").String()
|
||||
spsArgs.KeyFile = sps.Flag("key", "key file for tls").Short('K').Default("proxy.key").String()
|
||||
spsArgs.CaCertFile = sps.Flag("ca", "ca cert file for tls").Default("").String()
|
||||
spsArgs.Timeout = sps.Flag("timeout", "tcp timeout milliseconds when connect to real server or parent proxy").Short('i').Default("2000").Int()
|
||||
spsArgs.ParentType = sps.Flag("parent-type", "parent protocol type <tls|tcp|kcp>").Short('T').Enum("tls", "tcp", "kcp")
|
||||
spsArgs.LocalType = sps.Flag("local-type", "local protocol type <tls|tcp|kcp>").Default("tcp").Short('t').Enum("tls", "tcp", "kcp")
|
||||
spsArgs.Local = sps.Flag("local", "local ip:port to listen,multiple address use comma split,such as: 0.0.0.0:80,0.0.0.0:443").Short('p').Default(":33080").String()
|
||||
spsArgs.ParentServiceType = sps.Flag("parent-service-type", "parent service type <http|socks>").Short('S').Enum("http", "socks")
|
||||
spsArgs.DNSAddress = sps.Flag("dns-address", "if set, proxy will use this dns to resolve domains").Short('q').Default("").String()
|
||||
spsArgs.DNSTTL = sps.Flag("dns-ttl", "caching seconds of dns query result").Short('e').Default("300").Int()
//parse args
|
||||
serviceName := kingpin.MustParse(app.Parse(os.Args[1:]))
|
||||
flags := log.Ldate
|
||||
if *daemon {
|
||||
args := []string{}
|
||||
for _, arg := range os.Args[1:] {
|
||||
if arg != "--daemon" {
|
||||
args = append(args, arg)
|
||||
}
|
||||
}
|
||||
cmd := exec.Command(os.Args[0], args...)
|
||||
cmd.Start()
|
||||
fmt.Printf("%s [PID] %d running...\n", os.Args[0], cmd.Process.Pid)
|
||||
os.Exit(0)
//set kcp config
switch *kcpArgs.Mode {
|
||||
case "normal":
|
||||
*kcpArgs.NoDelay, *kcpArgs.Interval, *kcpArgs.Resend, *kcpArgs.NoCongestion = 0, 40, 2, 1
|
||||
case "fast":
|
||||
*kcpArgs.NoDelay, *kcpArgs.Interval, *kcpArgs.Resend, *kcpArgs.NoCongestion = 0, 30, 2, 1
|
||||
case "fast2":
|
||||
*kcpArgs.NoDelay, *kcpArgs.Interval, *kcpArgs.Resend, *kcpArgs.NoCongestion = 1, 20, 2, 1
|
||||
case "fast3":
|
||||
*kcpArgs.NoDelay, *kcpArgs.Interval, *kcpArgs.Resend, *kcpArgs.NoCongestion = 1, 10, 2, 1
|
||||
}
|
||||
pass := pbkdf2.Key([]byte(*kcpArgs.Key), []byte("snail007-goproxy"), 4096, 32, sha1.New)
|
||||
|
||||
switch *kcpArgs.Crypt {
|
||||
case "sm4":
|
||||
kcpArgs.Block, _ = kcp.NewSM4BlockCrypt(pass[:16])
|
||||
case "tea":
|
||||
kcpArgs.Block, _ = kcp.NewTEABlockCrypt(pass[:16])
|
||||
case "xor":
|
||||
kcpArgs.Block, _ = kcp.NewSimpleXORBlockCrypt(pass)
|
||||
case "none":
|
||||
kcpArgs.Block, _ = kcp.NewNoneBlockCrypt(pass)
|
||||
case "aes-128":
|
||||
kcpArgs.Block, _ = kcp.NewAESBlockCrypt(pass[:16])
|
||||
case "aes-192":
|
||||
kcpArgs.Block, _ = kcp.NewAESBlockCrypt(pass[:24])
|
||||
case "blowfish":
|
||||
kcpArgs.Block, _ = kcp.NewBlowfishBlockCrypt(pass)
|
||||
case "twofish":
|
||||
kcpArgs.Block, _ = kcp.NewTwofishBlockCrypt(pass)
|
||||
case "cast5":
|
||||
kcpArgs.Block, _ = kcp.NewCast5BlockCrypt(pass[:16])
|
||||
case "3des":
|
||||
kcpArgs.Block, _ = kcp.NewTripleDESBlockCrypt(pass[:24])
|
||||
case "xtea":
|
||||
kcpArgs.Block, _ = kcp.NewXTEABlockCrypt(pass[:16])
|
||||
case "salsa20":
|
||||
kcpArgs.Block, _ = kcp.NewSalsa20BlockCrypt(pass)
|
||||
default:
|
||||
*kcpArgs.Crypt = "aes"
|
||||
kcpArgs.Block, _ = kcp.NewAESBlockCrypt(pass)
|
||||
}
|
||||
//attach kcp config
|
||||
tcpArgs.KCP = kcpArgs
|
||||
httpArgs.KCP = kcpArgs
|
||||
socksArgs.KCP = kcpArgs
|
||||
spsArgs.KCP = kcpArgs
|
||||
muxBridgeArgs.KCP = kcpArgs
|
||||
muxServerArgs.KCP = kcpArgs
|
||||
muxClientArgs.KCP = kcpArgs
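For readers unfamiliar with the kcp-go setup above: the user key is stretched with PBKDF2 and the resulting 32 bytes are sliced down to whatever key length each cipher expects. A minimal standalone sketch of that same pattern, reusing the salt shown above and handling only a few of the crypt names (the rest follow identically):

package main

import (
	"crypto/sha1"
	"fmt"

	kcp "github.com/xtaci/kcp-go"
	"golang.org/x/crypto/pbkdf2"
)

// deriveBlockCrypt stretches key with PBKDF2 and picks a kcp-go BlockCrypt
// sized to the chosen cipher, mirroring the switch above.
func deriveBlockCrypt(key, crypt string) (kcp.BlockCrypt, error) {
	pass := pbkdf2.Key([]byte(key), []byte("snail007-goproxy"), 4096, 32, sha1.New)
	switch crypt {
	case "aes-128":
		return kcp.NewAESBlockCrypt(pass[:16]) // AES-128 wants 16 bytes
	case "aes-192":
		return kcp.NewAESBlockCrypt(pass[:24]) // AES-192 wants 24 bytes
	case "salsa20":
		return kcp.NewSalsa20BlockCrypt(pass)
	case "none":
		return kcp.NewNoneBlockCrypt(pass)
	default: // aes-256, the fallback used above
		return kcp.NewAESBlockCrypt(pass)
	}
}

func main() {
	block, err := deriveBlockCrypt("encrypt", "aes-128")
	fmt.Println(block != nil, err)
}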
flags := log.Ldate
|
||||
if *debug {
|
||||
flags |= log.Lshortfile | log.Lmicroseconds
|
||||
} else {
|
||||
@ -180,7 +293,75 @@ func initConfig() (err error) {
|
||||
log.Fatal(e)
|
||||
}
|
||||
log.SetOutput(f)
|
||||
} else {
|
||||
}
|
||||
if *daemon {
|
||||
args := []string{}
|
||||
for _, arg := range os.Args[1:] {
|
||||
if arg != "--daemon" {
|
||||
args = append(args, arg)
|
||||
}
|
||||
}
|
||||
cmd = exec.Command(os.Args[0], args...)
|
||||
cmd.Start()
|
||||
f := ""
|
||||
if *forever {
|
||||
f = "forever "
|
||||
}
|
||||
log.Printf("%s%s [PID] %d running...\n", f, os.Args[0], cmd.Process.Pid)
|
||||
os.Exit(0)
|
||||
}
|
||||
if *forever {
|
||||
args := []string{}
|
||||
for _, arg := range os.Args[1:] {
|
||||
if arg != "--forever" {
|
||||
args = append(args, arg)
|
||||
}
|
||||
}
|
||||
go func() {
|
||||
for {
|
||||
if cmd != nil {
|
||||
cmd.Process.Kill()
|
||||
}
|
||||
cmd = exec.Command(os.Args[0], args...)
|
||||
cmdReaderStderr, err := cmd.StderrPipe()
|
||||
if err != nil {
|
||||
log.Printf("ERR:%s,restarting...\n", err)
|
||||
continue
|
||||
}
|
||||
cmdReader, err := cmd.StdoutPipe()
|
||||
if err != nil {
|
||||
log.Printf("ERR:%s,restarting...\n", err)
|
||||
continue
|
||||
}
|
||||
scanner := bufio.NewScanner(cmdReader)
|
||||
scannerStdErr := bufio.NewScanner(cmdReaderStderr)
|
||||
go func() {
|
||||
for scanner.Scan() {
|
||||
fmt.Println(scanner.Text())
|
||||
}
|
||||
}()
|
||||
go func() {
|
||||
for scannerStdErr.Scan() {
|
||||
fmt.Println(scannerStdErr.Text())
|
||||
}
|
||||
}()
|
||||
if err := cmd.Start(); err != nil {
|
||||
log.Printf("ERR:%s,restarting...\n", err)
|
||||
continue
|
||||
}
|
||||
pid := cmd.Process.Pid
|
||||
log.Printf("worker %s [PID] %d running...\n", os.Args[0], pid)
|
||||
if err := cmd.Wait(); err != nil {
|
||||
log.Printf("ERR:%s,restarting...", err)
|
||||
continue
|
||||
}
|
||||
log.Printf("worker %s [PID] %d unexpected exited, restarting...\n", os.Args[0], pid)
|
||||
time.Sleep(time.Second * 5)
|
||||
}
|
||||
}()
|
||||
return
|
||||
}
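The --forever branch above is a small supervisor: start a worker copy of the binary, relay its stdout and stderr line by line, and restart it whenever it exits. A stripped-down sketch of that loop, with a placeholder command ("sleep 1", POSIX only) standing in for the re-exec of os.Args[0] and stderr handling omitted:

package main

import (
	"bufio"
	"fmt"
	"log"
	"os/exec"
	"time"
)

// supervise restarts the worker whenever it exits and streams its stdout.
func supervise() {
	for {
		cmd := exec.Command("sleep", "1") // placeholder worker
		out, err := cmd.StdoutPipe()
		if err != nil {
			log.Printf("pipe err: %s, retrying...", err)
			time.Sleep(time.Second)
			continue
		}
		go func() {
			sc := bufio.NewScanner(out)
			for sc.Scan() {
				fmt.Println(sc.Text()) // relay worker output
			}
		}()
		if err := cmd.Start(); err != nil {
			log.Printf("start err: %s, retrying...", err)
			time.Sleep(time.Second)
			continue
		}
		log.Printf("worker [PID] %d running...", cmd.Process.Pid)
		_ = cmd.Wait()              // block until the worker exits
		time.Sleep(5 * time.Second) // then restart it
	}
}

func main() { supervise() }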
if *logfile == "" {
|
||||
poster()
|
||||
}
|
||||
//regist services and run service
|
||||
@ -190,7 +371,11 @@ func initConfig() (err error) {
|
||||
services.Regist("tserver", services.NewTunnelServerManager(), tunnelServerArgs)
|
||||
services.Regist("tclient", services.NewTunnelClient(), tunnelClientArgs)
|
||||
services.Regist("tbridge", services.NewTunnelBridge(), tunnelBridgeArgs)
|
||||
services.Regist("server", services.NewMuxServerManager(), muxServerArgs)
|
||||
services.Regist("client", services.NewMuxClient(), muxClientArgs)
|
||||
services.Regist("bridge", services.NewMuxBridge(), muxBridgeArgs)
|
||||
services.Regist("socks", services.NewSocks(), socksArgs)
|
||||
services.Regist("sps", services.NewSPS(), spsArgs)
|
||||
service, err = services.Run(serviceName)
|
||||
if err != nil {
|
||||
log.Fatalf("run service [%s] fail, ERR:%s", serviceName, err)
BIN docs/images/1.1.jpg (new file, binary not shown, 94 KiB)
BIN docs/images/2.1.png (new file, binary not shown, 26 KiB)
BIN docs/images/2.2.png (new file, binary not shown, 32 KiB)
BIN docs/images/5.2.png (new file, binary not shown, 12 KiB)
BIN docs/images/alipay.jpg (new file, binary not shown, 39 KiB)
BIN docs/images/wxpay.jpg (new file, binary not shown, 24 KiB)
@ -1,12 +1,6 @@
|
||||
#!/bin/bash
|
||||
set -e
|
||||
|
||||
# install monexec
|
||||
tar zxvf monexec_0.1.1_linux_amd64.tar.gz
|
||||
cd monexec_0.1.1_linux_amd64
|
||||
cp monexec /usr/bin/
|
||||
chmod +x /usr/bin/monexec
|
||||
cd ..
|
||||
# #install proxy
|
||||
tar zxvf proxy-linux-amd64.tar.gz
|
||||
cp proxy /usr/bin/
@ -5,15 +5,8 @@ if [ -e /tmp/proxy ]; then
|
||||
fi
|
||||
mkdir /tmp/proxy
|
||||
cd /tmp/proxy
|
||||
wget https://github.com/reddec/monexec/releases/download/v0.1.1/monexec_0.1.1_linux_amd64.tar.gz
|
||||
wget https://github.com/snail007/goproxy/releases/download/v3.8/proxy-linux-amd64.tar.gz
|
||||
wget https://github.com/snail007/goproxy/releases/download/v4.5/proxy-linux-amd64.tar.gz
# install monexec
|
||||
tar zxvf monexec_0.1.1_linux_amd64.tar.gz
|
||||
cd monexec_0.1.1_linux_amd64
|
||||
cp monexec /usr/bin/
|
||||
chmod +x /usr/bin/monexec
|
||||
cd ..
|
||||
# #install proxy
|
||||
tar zxvf proxy-linux-amd64.tar.gz
|
||||
cp proxy /usr/bin/
|
||||
|
||||
21 main.go
@ -1,22 +1,25 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"log"
|
||||
"os"
|
||||
"os/signal"
|
||||
"proxy/services"
|
||||
"snail007/proxy/services"
|
||||
"syscall"
|
||||
)
|
||||
|
||||
const APP_VERSION = "3.8"
|
||||
const APP_VERSION = "4.5"
|
||||
|
||||
func main() {
|
||||
err := initConfig()
|
||||
if err != nil {
|
||||
log.Fatalf("err : %s", err)
|
||||
}
|
||||
Clean(&service.S)
|
||||
if service != nil && service.S != nil {
|
||||
Clean(&service.S)
|
||||
} else {
|
||||
Clean(nil)
|
||||
}
|
||||
}
|
||||
func Clean(s *services.Service) {
|
||||
signalChan := make(chan os.Signal, 1)
|
||||
@ -29,8 +32,14 @@ func Clean(s *services.Service) {
|
||||
syscall.SIGQUIT)
|
||||
go func() {
|
||||
for _ = range signalChan {
|
||||
fmt.Println("\nReceived an interrupt, stopping services...")
|
||||
(*s).Clean()
|
||||
log.Println("Received an interrupt, stopping services...")
|
||||
if s != nil && *s != nil {
|
||||
(*s).Clean()
|
||||
}
|
||||
if cmd != nil {
|
||||
log.Printf("clean process %d", cmd.Process.Pid)
|
||||
cmd.Process.Kill()
|
||||
}
|
||||
cleanupDone <- true
|
||||
}
|
||||
}()
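The Clean helper boils down to the standard signal.Notify pattern: block until an interrupt arrives, run cleanup, then exit. A minimal self-contained version of that skeleton:

package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
)

func main() {
	sigCh := make(chan os.Signal, 1)
	done := make(chan bool)
	signal.Notify(sigCh, os.Interrupt, syscall.SIGHUP, syscall.SIGTERM, syscall.SIGQUIT)
	go func() {
		<-sigCh // wait for the first signal
		fmt.Println("Received an interrupt, stopping services...")
		// service / child-process cleanup would go here
		done <- true
	}()
	<-done
}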
83 release.sh
@ -1,5 +1,5 @@
|
||||
#!/bin/bash
|
||||
VER="3.8"
|
||||
VER="4.5"
|
||||
RELEASE="release-${VER}"
|
||||
rm -rf .cert
|
||||
mkdir .cert
|
||||
@ -9,56 +9,59 @@ cd .cert
|
||||
cd ..
|
||||
rm -rf ${RELEASE}
|
||||
mkdir ${RELEASE}
|
||||
export CGO_ENABLED=0
|
||||
#linux
|
||||
GOOS=linux GOARCH=386 go build && tar zcfv "${RELEASE}/proxy-linux-386.tar.gz" proxy direct blocked
|
||||
GOOS=linux GOARCH=amd64 go build && tar zcfv "${RELEASE}/proxy-linux-amd64.tar.gz" proxy direct blocked
|
||||
GOOS=linux GOARCH=arm GOARM=7 go build && tar zcfv "${RELEASE}/proxy-linux-arm.tar.gz" proxy direct blocked
|
||||
GOOS=linux GOARCH=arm64 GOARM=7 go build && tar zcfv "${RELEASE}/proxy-linux-arm64.tar.gz" proxy direct blocked
|
||||
GOOS=linux GOARCH=mips go build && tar zcfv "${RELEASE}/proxy-linux-mips.tar.gz" proxy direct blocked
|
||||
GOOS=linux GOARCH=mips64 go build && tar zcfv "${RELEASE}/proxy-linux-mips64.tar.gz" proxy direct blocked
|
||||
GOOS=linux GOARCH=mips64le go build && tar zcfv "${RELEASE}/proxy-linux-mips64le.tar.gz" proxy direct blocked
|
||||
GOOS=linux GOARCH=mipsle go build && tar zcfv "${RELEASE}/proxy-linux-mipsle.tar.gz" proxy direct blocked
|
||||
GOOS=linux GOARCH=ppc64 go build && tar zcfv "${RELEASE}/proxy-linux-ppc64.tar.gz" proxy direct blocked
|
||||
GOOS=linux GOARCH=ppc64le go build && tar zcfv "${RELEASE}/proxy-linux-ppc64le.tar.gz" proxy direct blocked
|
||||
GOOS=linux GOARCH=s390x go build && tar zcfv "${RELEASE}/proxy-linux-s390x.tar.gz" proxy direct blocked
|
||||
CGO_ENABLED=0 GOOS=linux GOARCH=386 go build && tar zcfv "${RELEASE}/proxy-linux-386.tar.gz" proxy direct blocked
|
||||
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build && tar zcfv "${RELEASE}/proxy-linux-amd64.tar.gz" proxy direct blocked
|
||||
CGO_ENABLED=0 GOOS=linux GOARCH=arm GOARM=6 go build && tar zcfv "${RELEASE}/proxy-linux-arm-v6.tar.gz" proxy direct blocked
|
||||
CGO_ENABLED=0 GOOS=linux GOARCH=arm64 GOARM=6 go build && tar zcfv "${RELEASE}/proxy-linux-arm64-v6.tar.gz" proxy direct blocked
|
||||
CGO_ENABLED=0 GOOS=linux GOARCH=arm GOARM=7 go build && tar zcfv "${RELEASE}/proxy-linux-arm-v7.tar.gz" proxy direct blocked
|
||||
CGO_ENABLED=0 GOOS=linux GOARCH=arm64 GOARM=7 go build && tar zcfv "${RELEASE}/proxy-linux-arm64-v7.tar.gz" proxy direct blocked
|
||||
CGO_ENABLED=0 GOOS=linux GOARCH=arm GOARM=5 go build && tar zcfv "${RELEASE}/proxy-linux-arm-v5.tar.gz" proxy direct blocked
|
||||
CGO_ENABLED=0 GOOS=linux GOARCH=arm64 GOARM=5 go build && tar zcfv "${RELEASE}/proxy-linux-arm64-v5.tar.gz" proxy direct blocked
|
||||
CGO_ENABLED=0 GOOS=linux GOARCH=mips go build && tar zcfv "${RELEASE}/proxy-linux-mips.tar.gz" proxy direct blocked
|
||||
CGO_ENABLED=0 GOOS=linux GOARCH=mips64 go build && tar zcfv "${RELEASE}/proxy-linux-mips64.tar.gz" proxy direct blocked
|
||||
CGO_ENABLED=0 GOOS=linux GOARCH=mips64le go build && tar zcfv "${RELEASE}/proxy-linux-mips64le.tar.gz" proxy direct blocked
|
||||
CGO_ENABLED=0 GOOS=linux GOARCH=mipsle go build && tar zcfv "${RELEASE}/proxy-linux-mipsle.tar.gz" proxy direct blocked
|
||||
CGO_ENABLED=0 GOOS=linux GOARCH=ppc64 go build && tar zcfv "${RELEASE}/proxy-linux-ppc64.tar.gz" proxy direct blocked
|
||||
CGO_ENABLED=0 GOOS=linux GOARCH=ppc64le go build && tar zcfv "${RELEASE}/proxy-linux-ppc64le.tar.gz" proxy direct blocked
|
||||
CGO_ENABLED=0 GOOS=linux GOARCH=s390x go build && tar zcfv "${RELEASE}/proxy-linux-s390x.tar.gz" proxy direct blocked
|
||||
#android
|
||||
GOOS=android GOARCH=386 go build && tar zcfv "${RELEASE}/proxy-android-386.tar.gz" proxy direct blocked
|
||||
GOOS=android GOARCH=amd64 go build && tar zcfv "${RELEASE}/proxy-android-amd64.tar.gz" proxy direct blocked
|
||||
GOOS=android GOARCH=arm go build && tar zcfv "${RELEASE}/proxy-android-arm.tar.gz" proxy direct blocked
|
||||
GOOS=android GOARCH=arm64 go build && tar zcfv "${RELEASE}/proxy-android-arm64.tar.gz" proxy direct blocked
|
||||
CGO_ENABLED=0 GOOS=android GOARCH=386 go build && tar zcfv "${RELEASE}/proxy-android-386.tar.gz" proxy direct blocked
|
||||
CGO_ENABLED=0 GOOS=android GOARCH=amd64 go build && tar zcfv "${RELEASE}/proxy-android-amd64.tar.gz" proxy direct blocked
|
||||
CGO_ENABLED=0 GOOS=android GOARCH=arm go build && tar zcfv "${RELEASE}/proxy-android-arm.tar.gz" proxy direct blocked
|
||||
CGO_ENABLED=0 GOOS=android GOARCH=arm64 go build && tar zcfv "${RELEASE}/proxy-android-arm64.tar.gz" proxy direct blocked
|
||||
#darwin
|
||||
GOOS=darwin GOARCH=386 go build go build && tar zcfv "${RELEASE}/proxy-darwin-386.tar.gz" proxy direct blocked
|
||||
GOOS=darwin GOARCH=amd64 go build && tar zcfv "${RELEASE}/proxy-darwin-amd64.tar.gz" proxy direct blocked
|
||||
GOOS=darwin GOARCH=arm go build && tar zcfv "${RELEASE}/proxy-darwin-arm.tar.gz" proxy direct blocked
|
||||
GOOS=darwin GOARCH=arm64 go build && tar zcfv "${RELEASE}/proxy-darwin-arm64.tar.gz" proxy direct blocked
|
||||
CGO_ENABLED=0 GOOS=darwin GOARCH=386 go build && tar zcfv "${RELEASE}/proxy-darwin-386.tar.gz" proxy direct blocked
|
||||
CGO_ENABLED=0 GOOS=darwin GOARCH=amd64 go build && tar zcfv "${RELEASE}/proxy-darwin-amd64.tar.gz" proxy direct blocked
|
||||
CGO_ENABLED=0 GOOS=darwin GOARCH=arm go build && tar zcfv "${RELEASE}/proxy-darwin-arm.tar.gz" proxy direct blocked
|
||||
CGO_ENABLED=0 GOOS=darwin GOARCH=arm64 go build && tar zcfv "${RELEASE}/proxy-darwin-arm64.tar.gz" proxy direct blocked
|
||||
#dragonfly
|
||||
GOOS=dragonfly GOARCH=amd64 go build && tar zcfv "${RELEASE}/proxy-dragonfly-amd64.tar.gz" proxy direct blocked
|
||||
CGO_ENABLED=0 GOOS=dragonfly GOARCH=amd64 go build && tar zcfv "${RELEASE}/proxy-dragonfly-amd64.tar.gz" proxy direct blocked
|
||||
#freebsd
|
||||
GOOS=freebsd GOARCH=386 go build && tar zcfv "${RELEASE}/proxy-freebsd-386.tar.gz" proxy direct blocked
|
||||
GOOS=freebsd GOARCH=amd64 go build && tar zcfv "${RELEASE}/proxy-freebsd-amd64.tar.gz" proxy direct blocked
|
||||
GOOS=freebsd GOARCH=arm go build && tar zcfv "${RELEASE}/proxy-freebsd-arm.tar.gz" proxy direct blocked
|
||||
CGO_ENABLED=0 GOOS=freebsd GOARCH=386 go build && tar zcfv "${RELEASE}/proxy-freebsd-386.tar.gz" proxy direct blocked
|
||||
CGO_ENABLED=0 GOOS=freebsd GOARCH=amd64 go build && tar zcfv "${RELEASE}/proxy-freebsd-amd64.tar.gz" proxy direct blocked
|
||||
CGO_ENABLED=0 GOOS=freebsd GOARCH=arm go build && tar zcfv "${RELEASE}/proxy-freebsd-arm.tar.gz" proxy direct blocked
|
||||
#nacl
|
||||
GOOS=nacl GOARCH=386 go build && tar zcfv "${RELEASE}/proxy-nacl-386.tar.gz" proxy direct blocked
|
||||
GOOS=nacl GOARCH=amd64p32 go build && tar zcfv "${RELEASE}/proxy-nacl-amd64p32.tar.gz" proxy direct blocked
|
||||
GOOS=nacl GOARCH=arm go build && tar zcfv "${RELEASE}/proxy-nacl-arm.tar.gz" proxy direct blocked
|
||||
CGO_ENABLED=0 GOOS=nacl GOARCH=386 go build && tar zcfv "${RELEASE}/proxy-nacl-386.tar.gz" proxy direct blocked
|
||||
CGO_ENABLED=0 GOOS=nacl GOARCH=amd64p32 go build && tar zcfv "${RELEASE}/proxy-nacl-amd64p32.tar.gz" proxy direct blocked
|
||||
CGO_ENABLED=0 GOOS=nacl GOARCH=arm go build && tar zcfv "${RELEASE}/proxy-nacl-arm.tar.gz" proxy direct blocked
|
||||
#netbsd
|
||||
GOOS=netbsd GOARCH=386 go build && tar zcfv "${RELEASE}/proxy-netbsd-386.tar.gz" proxy direct blocked
|
||||
GOOS=netbsd GOARCH=amd64 go build && tar zcfv "${RELEASE}/proxy-netbsd-amd64.tar.gz" proxy direct blocked
|
||||
GOOS=netbsd GOARCH=arm go build && tar zcfv "${RELEASE}/proxy-netbsd-arm.tar.gz" proxy direct blocked
|
||||
CGO_ENABLED=0 GOOS=netbsd GOARCH=386 go build && tar zcfv "${RELEASE}/proxy-netbsd-386.tar.gz" proxy direct blocked
|
||||
CGO_ENABLED=0 GOOS=netbsd GOARCH=amd64 go build && tar zcfv "${RELEASE}/proxy-netbsd-amd64.tar.gz" proxy direct blocked
|
||||
CGO_ENABLED=0 GOOS=netbsd GOARCH=arm go build && tar zcfv "${RELEASE}/proxy-netbsd-arm.tar.gz" proxy direct blocked
|
||||
#openbsd
|
||||
GOOS=openbsd GOARCH=386 go build && tar zcfv "${RELEASE}/proxy-openbsd-386.tar.gz" proxy direct blocked
|
||||
GOOS=openbsd GOARCH=amd64 go build && tar zcfv "${RELEASE}/proxy-openbsd-amd64.tar.gz" proxy direct blocked
|
||||
GOOS=openbsd GOARCH=arm go build && tar zcfv "${RELEASE}/proxy-openbsd-arm.tar.gz" proxy direct blocked
|
||||
CGO_ENABLED=0 GOOS=openbsd GOARCH=386 go build && tar zcfv "${RELEASE}/proxy-openbsd-386.tar.gz" proxy direct blocked
|
||||
CGO_ENABLED=0 GOOS=openbsd GOARCH=amd64 go build && tar zcfv "${RELEASE}/proxy-openbsd-amd64.tar.gz" proxy direct blocked
|
||||
CGO_ENABLED=0 GOOS=openbsd GOARCH=arm go build && tar zcfv "${RELEASE}/proxy-openbsd-arm.tar.gz" proxy direct blocked
|
||||
#plan9
|
||||
GOOS=plan9 GOARCH=386 go build && tar zcfv "${RELEASE}/proxy-plan9-386.tar.gz" proxy direct blocked
|
||||
GOOS=plan9 GOARCH=amd64 go build && tar zcfv "${RELEASE}/proxy-plan9-amd64.tar.gz" proxy direct blocked
|
||||
GOOS=plan9 GOARCH=arm go build && tar zcfv "${RELEASE}/proxy-plan9-arm.tar.gz" proxy direct blocked
|
||||
CGO_ENABLED=0 GOOS=plan9 GOARCH=386 go build && tar zcfv "${RELEASE}/proxy-plan9-386.tar.gz" proxy direct blocked
|
||||
CGO_ENABLED=0 GOOS=plan9 GOARCH=amd64 go build && tar zcfv "${RELEASE}/proxy-plan9-amd64.tar.gz" proxy direct blocked
|
||||
CGO_ENABLED=0 GOOS=plan9 GOARCH=arm go build && tar zcfv "${RELEASE}/proxy-plan9-arm.tar.gz" proxy direct blocked
|
||||
#solaris
|
||||
GOOS=solaris GOARCH=amd64 go build && tar zcfv "${RELEASE}/proxy-solaris-amd64.tar.gz" proxy direct blocked
|
||||
CGO_ENABLED=0 GOOS=solaris GOARCH=amd64 go build && tar zcfv "${RELEASE}/proxy-solaris-amd64.tar.gz" proxy direct blocked
|
||||
#windows
|
||||
GOOS=windows GOARCH=386 go build && tar zcfv "${RELEASE}/proxy-windows-386.tar.gz" proxy.exe direct blocked .cert/proxy.crt .cert/proxy.key
|
||||
GOOS=windows GOARCH=amd64 go build && tar zcfv "${RELEASE}/proxy-windows-amd64.tar.gz" proxy.exe direct blocked .cert/proxy.crt .cert/proxy.key
|
||||
CGO_ENABLED=0 GOOS=windows GOARCH=386 go build && tar zcfv "${RELEASE}/proxy-windows-386.tar.gz" proxy.exe direct blocked .cert/proxy.crt .cert/proxy.key
|
||||
CGO_ENABLED=0 GOOS=windows GOARCH=amd64 go build && tar zcfv "${RELEASE}/proxy-windows-amd64.tar.gz" proxy.exe direct blocked .cert/proxy.crt .cert/proxy.key
|
||||
|
||||
rm -rf proxy proxy.exe .cert
|
||||
|
||||
|
||||
@ -1,6 +1,10 @@
|
||||
package services
|
||||
|
||||
import "golang.org/x/crypto/ssh"
|
||||
import (
|
||||
"snail007/proxy/services/kcpcfg"
"golang.org/x/crypto/ssh"
|
||||
)
|
||||
|
||||
// tcp := app.Command("tcp", "proxy on tcp mode")
|
||||
// t := tcp.Flag("tcp-timeout", "tcp timeout milliseconds when connect to real server or parent proxy").Default("2000").Int()
|
||||
@ -20,6 +24,48 @@ const (
|
||||
CONN_CLIENT_MUX = uint8(7)
|
||||
)
|
||||
|
||||
type MuxServerArgs struct {
|
||||
Parent *string
|
||||
ParentType *string
|
||||
CertFile *string
|
||||
KeyFile *string
|
||||
CertBytes []byte
|
||||
KeyBytes []byte
|
||||
Local *string
|
||||
IsUDP *bool
|
||||
Key *string
|
||||
Remote *string
|
||||
Timeout *int
|
||||
Route *[]string
|
||||
Mgr *MuxServerManager
|
||||
IsCompress *bool
|
||||
SessionCount *int
|
||||
KCP kcpcfg.KCPConfigArgs
|
||||
}
|
||||
type MuxClientArgs struct {
|
||||
Parent *string
|
||||
ParentType *string
|
||||
CertFile *string
|
||||
KeyFile *string
|
||||
CertBytes []byte
|
||||
KeyBytes []byte
|
||||
Key *string
|
||||
Timeout *int
|
||||
IsCompress *bool
|
||||
SessionCount *int
|
||||
KCP kcpcfg.KCPConfigArgs
|
||||
}
|
||||
type MuxBridgeArgs struct {
|
||||
CertFile *string
|
||||
KeyFile *string
|
||||
CertBytes []byte
|
||||
KeyBytes []byte
|
||||
Local *string
|
||||
LocalType *string
|
||||
Timeout *int
|
||||
IsCompress *bool
|
||||
KCP kcpcfg.KCPConfigArgs
|
||||
}
|
||||
type TunnelServerArgs struct {
|
||||
Parent *string
|
||||
CertFile *string
|
||||
@ -67,14 +113,15 @@ type TCPArgs struct {
|
||||
Timeout *int
|
||||
PoolSize *int
|
||||
CheckParentInterval *int
|
||||
KCPMethod *string
|
||||
KCPKey *string
|
||||
KCP kcpcfg.KCPConfigArgs
|
||||
}
|
||||
|
||||
type HTTPArgs struct {
|
||||
Parent *string
|
||||
CertFile *string
|
||||
KeyFile *string
|
||||
CaCertFile *string
|
||||
CaCertBytes []byte
|
||||
CertBytes []byte
|
||||
KeyBytes []byte
|
||||
Local *string
|
||||
@ -100,9 +147,10 @@ type HTTPArgs struct {
|
||||
SSHUser *string
|
||||
SSHKeyBytes []byte
|
||||
SSHAuthMethod ssh.AuthMethod
|
||||
KCPMethod *string
|
||||
KCPKey *string
|
||||
KCP kcpcfg.KCPConfigArgs
|
||||
LocalIPS *[]string
|
||||
DNSAddress *string
|
||||
DNSTTL *int
|
||||
}
|
||||
type UDPArgs struct {
|
||||
Parent *string
|
||||
@ -123,6 +171,8 @@ type SocksArgs struct {
|
||||
LocalType *string
|
||||
CertFile *string
|
||||
KeyFile *string
|
||||
CaCertFile *string
|
||||
CaCertBytes []byte
|
||||
CertBytes []byte
|
||||
KeyBytes []byte
|
||||
SSHKeyFile *string
|
||||
@ -142,13 +192,42 @@ type SocksArgs struct {
|
||||
AuthURLOkCode *int
|
||||
AuthURLTimeout *int
|
||||
AuthURLRetry *int
|
||||
KCPMethod *string
|
||||
KCPKey *string
|
||||
KCP kcpcfg.KCPConfigArgs
|
||||
UDPParent *string
|
||||
UDPLocal *string
|
||||
LocalIPS *[]string
|
||||
DNSAddress *string
|
||||
DNSTTL *int
|
||||
}
|
||||
type SPSArgs struct {
|
||||
Parent *string
|
||||
CertFile *string
|
||||
KeyFile *string
|
||||
CaCertFile *string
|
||||
CaCertBytes []byte
|
||||
CertBytes []byte
|
||||
KeyBytes []byte
|
||||
Local *string
|
||||
ParentType *string
|
||||
LocalType *string
|
||||
Timeout *int
|
||||
KCP kcpcfg.KCPConfigArgs
|
||||
ParentServiceType *string
|
||||
DNSAddress *string
|
||||
DNSTTL *int
|
||||
}
|
||||
|
||||
func (a *SPSArgs) Protocol() string {
|
||||
switch *a.LocalType {
|
||||
case TYPE_TLS:
|
||||
return TYPE_TLS
|
||||
case TYPE_TCP:
|
||||
return TYPE_TCP
|
||||
case TYPE_KCP:
|
||||
return TYPE_KCP
|
||||
}
|
||||
return "unknown"
|
||||
}
|
||||
func (a *TCPArgs) Protocol() string {
|
||||
switch *a.LocalType {
|
||||
case TYPE_TLS:
|
||||
|
||||
117 services/http.go
@ -6,21 +6,23 @@ import (
|
||||
"io/ioutil"
|
||||
"log"
|
||||
"net"
|
||||
"proxy/utils"
|
||||
"runtime/debug"
|
||||
"snail007/proxy/utils"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"golang.org/x/crypto/ssh"
|
||||
)
|
||||
|
||||
type HTTP struct {
|
||||
outPool utils.OutPool
|
||||
cfg HTTPArgs
|
||||
checker utils.Checker
|
||||
basicAuth utils.BasicAuth
|
||||
sshClient *ssh.Client
|
||||
lockChn chan bool
|
||||
outPool utils.OutPool
|
||||
cfg HTTPArgs
|
||||
checker utils.Checker
|
||||
basicAuth utils.BasicAuth
|
||||
sshClient *ssh.Client
|
||||
lockChn chan bool
|
||||
domainResolver utils.DomainResolver
|
||||
}
|
||||
|
||||
func NewHTTP() Service {
|
||||
@ -35,10 +37,16 @@ func NewHTTP() Service {
|
||||
func (s *HTTP) CheckArgs() {
|
||||
var err error
|
||||
if *s.cfg.Parent != "" && *s.cfg.ParentType == "" {
|
||||
log.Fatalf("parent type unkown,use -T <tls|tcp|ssh>")
|
||||
log.Fatalf("parent type unkown,use -T <tls|tcp|ssh|kcp>")
|
||||
}
|
||||
if *s.cfg.ParentType == "tls" || *s.cfg.LocalType == "tls" {
|
||||
s.cfg.CertBytes, s.cfg.KeyBytes = utils.TlsBytes(*s.cfg.CertFile, *s.cfg.KeyFile)
|
||||
if *s.cfg.CaCertFile != "" {
|
||||
s.cfg.CaCertBytes, err = ioutil.ReadFile(*s.cfg.CaCertFile)
|
||||
if err != nil {
|
||||
log.Fatalf("read ca file error,ERR:%s", err)
|
||||
}
|
||||
}
|
||||
}
|
||||
if *s.cfg.ParentType == "ssh" {
|
||||
if *s.cfg.SSHUser == "" {
|
||||
@ -73,6 +81,9 @@ func (s *HTTP) InitService() {
|
||||
if *s.cfg.Parent != "" {
|
||||
s.checker = utils.NewChecker(*s.cfg.HTTPTimeout, int64(*s.cfg.Interval), *s.cfg.Blocked, *s.cfg.Direct)
|
||||
}
|
||||
if *s.cfg.DNSAddress != "" {
|
||||
(*s).domainResolver = utils.NewDomainResolver(*s.cfg.DNSAddress, *s.cfg.DNSTTL)
|
||||
}
|
||||
if *s.cfg.ParentType == "ssh" {
|
||||
err := s.ConnectSSH()
|
||||
if err != nil {
|
||||
@ -81,7 +92,7 @@ func (s *HTTP) InitService() {
|
||||
go func() {
|
||||
//循环检查ssh网络连通性
|
||||
for {
|
||||
conn, err := utils.ConnectHost(*s.cfg.Parent, *s.cfg.Timeout*2)
|
||||
conn, err := utils.ConnectHost(s.Resolve(*s.cfg.Parent), *s.cfg.Timeout*2)
|
||||
if err == nil {
|
||||
_, err = conn.Write([]byte{0})
|
||||
}
|
||||
@ -115,20 +126,24 @@ func (s *HTTP) Start(args interface{}) (err error) {
|
||||
s.InitOutConnPool()
|
||||
}
|
||||
s.InitService()
|
||||
host, port, _ := net.SplitHostPort(*s.cfg.Local)
|
||||
p, _ := strconv.Atoi(port)
|
||||
sc := utils.NewServerChannel(host, p)
|
||||
if *s.cfg.LocalType == TYPE_TCP {
|
||||
err = sc.ListenTCP(s.callback)
|
||||
} else if *s.cfg.LocalType == TYPE_TLS {
|
||||
err = sc.ListenTls(s.cfg.CertBytes, s.cfg.KeyBytes, s.callback)
|
||||
} else if *s.cfg.LocalType == TYPE_KCP {
|
||||
err = sc.ListenKCP(*s.cfg.KCPMethod, *s.cfg.KCPKey, s.callback)
|
||||
for _, addr := range strings.Split(*s.cfg.Local, ",") {
|
||||
if addr != "" {
|
||||
host, port, _ := net.SplitHostPort(addr)
|
||||
p, _ := strconv.Atoi(port)
|
||||
sc := utils.NewServerChannel(host, p)
|
||||
if *s.cfg.LocalType == TYPE_TCP {
|
||||
err = sc.ListenTCP(s.callback)
|
||||
} else if *s.cfg.LocalType == TYPE_TLS {
|
||||
err = sc.ListenTls(s.cfg.CertBytes, s.cfg.KeyBytes, s.cfg.CaCertBytes, s.callback)
|
||||
} else if *s.cfg.LocalType == TYPE_KCP {
|
||||
err = sc.ListenKCP(s.cfg.KCP, s.callback)
|
||||
}
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
log.Printf("%s http(s) proxy on %s", *s.cfg.LocalType, (*sc.Listener).Addr())
|
||||
}
|
||||
}
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
log.Printf("%s http(s) proxy on %s", *s.cfg.LocalType, (*sc.Listener).Addr())
|
||||
return
|
||||
}
|
||||
|
||||
@ -152,22 +167,23 @@ func (s *HTTP) callback(inConn net.Conn) {
|
||||
return
|
||||
}
|
||||
address := req.Host
|
||||
|
||||
useProxy := true
|
||||
if *s.cfg.Parent == "" {
|
||||
useProxy = false
|
||||
} else if *s.cfg.Always {
|
||||
host, _, _ := net.SplitHostPort(address)
|
||||
useProxy := false
|
||||
if !utils.IsIternalIP(host) {
|
||||
useProxy = true
|
||||
} else {
|
||||
if req.IsHTTPS() {
|
||||
s.checker.Add(address, true, req.Method, "", nil)
|
||||
if *s.cfg.Parent == "" {
|
||||
useProxy = false
|
||||
} else if *s.cfg.Always {
|
||||
useProxy = true
|
||||
} else {
|
||||
s.checker.Add(address, false, req.Method, req.URL, req.HeadBuf)
|
||||
k := s.Resolve(address)
|
||||
s.checker.Add(k)
|
||||
//var n, m uint
|
||||
useProxy, _, _ = s.checker.IsBlocked(k)
|
||||
//log.Printf("blocked ? : %v, %s , fail:%d ,success:%d", useProxy, address, n, m)
|
||||
}
|
||||
//var n, m uint
|
||||
useProxy, _, _ = s.checker.IsBlocked(req.Host)
|
||||
//log.Printf("blocked ? : %v, %s , fail:%d ,success:%d", useProxy, address, n, m)
|
||||
}
|
||||
|
||||
log.Printf("use proxy : %v, %s", useProxy, address)
|
||||
|
||||
err = s.OutToTCP(useProxy, address, &inConn, &req)
|
||||
@ -206,7 +222,7 @@ func (s *HTTP) OutToTCP(useProxy bool, address string, inConn *net.Conn, req *ut
|
||||
}
|
||||
}
|
||||
} else {
|
||||
outConn, err = utils.ConnectHost(address, *s.cfg.Timeout)
|
||||
outConn, err = utils.ConnectHost(s.Resolve(address), *s.cfg.Timeout)
|
||||
}
|
||||
tryCount++
|
||||
if err == nil || tryCount > maxTryCount {
|
||||
@ -299,7 +315,7 @@ func (s *HTTP) ConnectSSH() (err error) {
|
||||
if s.sshClient != nil {
|
||||
s.sshClient.Close()
|
||||
}
|
||||
s.sshClient, err = ssh.Dial("tcp", *s.cfg.Parent, &config)
|
||||
s.sshClient, err = ssh.Dial("tcp", s.Resolve(*s.cfg.Parent), &config)
|
||||
<-s.lockChn
|
||||
return
|
||||
}
|
||||
@ -310,10 +326,9 @@ func (s *HTTP) InitOutConnPool() {
|
||||
s.outPool = utils.NewOutPool(
|
||||
*s.cfg.CheckParentInterval,
|
||||
*s.cfg.ParentType,
|
||||
*s.cfg.KCPMethod,
|
||||
*s.cfg.KCPKey,
|
||||
s.cfg.CertBytes, s.cfg.KeyBytes,
|
||||
*s.cfg.Parent,
|
||||
s.cfg.KCP,
|
||||
s.cfg.CertBytes, s.cfg.KeyBytes, s.cfg.CaCertBytes,
|
||||
s.Resolve(*s.cfg.Parent),
|
||||
*s.cfg.Timeout,
|
||||
*s.cfg.PoolSize,
|
||||
*s.cfg.PoolSize*2,
|
||||
@ -321,7 +336,11 @@ func (s *HTTP) InitOutConnPool() {
|
||||
}
|
||||
}
|
||||
func (s *HTTP) InitBasicAuth() (err error) {
|
||||
s.basicAuth = utils.NewBasicAuth()
|
||||
if *s.cfg.DNSAddress != "" {
|
||||
s.basicAuth = utils.NewBasicAuth(&(*s).domainResolver)
|
||||
} else {
|
||||
s.basicAuth = utils.NewBasicAuth(nil)
|
||||
}
|
||||
if *s.cfg.AuthURL != "" {
|
||||
s.basicAuth.SetAuthURL(*s.cfg.AuthURL, *s.cfg.AuthURLOkCode, *s.cfg.AuthURLTimeout, *s.cfg.AuthURLRetry)
|
||||
log.Printf("auth from %s", *s.cfg.AuthURL)
|
||||
@ -355,7 +374,11 @@ func (s *HTTP) IsDeadLoop(inLocalAddr string, host string) bool {
|
||||
}
|
||||
if inPort == outPort {
|
||||
var outIPs []net.IP
|
||||
outIPs, err = net.LookupIP(outDomain)
|
||||
if *s.cfg.DNSAddress != "" {
|
||||
outIPs = []net.IP{net.ParseIP(s.Resolve(outDomain))}
|
||||
} else {
|
||||
outIPs, err = net.LookupIP(outDomain)
|
||||
}
|
||||
if err == nil {
|
||||
for _, ip := range outIPs {
|
||||
if ip.String() == inIP {
|
||||
@ -379,3 +402,13 @@ func (s *HTTP) IsDeadLoop(inLocalAddr string, host string) bool {
|
||||
}
|
||||
return false
|
||||
}
|
||||
func (s *HTTP) Resolve(address string) string {
|
||||
if *s.cfg.DNSAddress == "" {
|
||||
return address
|
||||
}
|
||||
ip, err := s.domainResolver.Resolve(address)
|
||||
if err != nil {
|
||||
log.Printf("dns error %s , ERR:%s", address, err)
|
||||
}
|
||||
return ip
|
||||
}
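Resolve delegates to the internal utils.DomainResolver when --dns-address is set. The same effect can be sketched with the standard library alone by pointing a net.Resolver at a specific DNS server; 8.8.8.8:53 and example.com below are arbitrary placeholders, not values used by the proxy:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			// always dial the chosen DNS server instead of the system one
			d := net.Dialer{Timeout: 3 * time.Second}
			return d.DialContext(ctx, "udp", "8.8.8.8:53")
		},
	}
	ips, err := r.LookupHost(context.Background(), "example.com")
	fmt.Println(ips, err)
}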
24 services/kcpcfg/args.go (new file)
@ -0,0 +1,24 @@
|
||||
package kcpcfg
|
||||
|
||||
import kcp "github.com/xtaci/kcp-go"
|
||||
|
||||
type KCPConfigArgs struct {
|
||||
Key *string
|
||||
Crypt *string
|
||||
Mode *string
|
||||
MTU *int
|
||||
SndWnd *int
|
||||
RcvWnd *int
|
||||
DataShard *int
|
||||
ParityShard *int
|
||||
DSCP *int
|
||||
NoComp *bool
|
||||
AckNodelay *bool
|
||||
NoDelay *int
|
||||
Interval *int
|
||||
Resend *int
|
||||
NoCongestion *int
|
||||
SockBuf *int
|
||||
KeepAlive *int
|
||||
Block kcp.BlockCrypt
|
||||
}
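Every field of KCPConfigArgs is a pointer because kingpin's Flag(...).Int()/String()/Bool() return pointers. When building the struct by hand (for example in a test), small helpers keep it readable. The intp/strp helpers below are hypothetical, and the sketch assumes the repo is importable as snail007/proxy, matching the import paths used elsewhere in this diff:

package main

import (
	"fmt"

	"snail007/proxy/services/kcpcfg"
)

// tiny pointer helpers, not part of the package
func intp(v int) *int       { return &v }
func strp(v string) *string { return &v }

func main() {
	cfg := kcpcfg.KCPConfigArgs{
		Key:   strp("demo"),
		Crypt: strp("aes"),
		Mode:  strp("fast"),
		MTU:   intp(1350),
	}
	fmt.Println(*cfg.Mode, *cfg.MTU)
}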
211 services/mux_bridge.go (new file)
@ -0,0 +1,211 @@
|
||||
package services
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"io"
|
||||
"log"
|
||||
"math/rand"
|
||||
"net"
|
||||
"snail007/proxy/utils"
|
||||
"strconv"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/xtaci/smux"
|
||||
)
|
||||
|
||||
type MuxBridge struct {
|
||||
cfg MuxBridgeArgs
|
||||
clientControlConns utils.ConcurrentMap
|
||||
router utils.ClientKeyRouter
|
||||
l *sync.Mutex
|
||||
}
|
||||
|
||||
func NewMuxBridge() Service {
|
||||
b := &MuxBridge{
|
||||
cfg: MuxBridgeArgs{},
|
||||
clientControlConns: utils.NewConcurrentMap(),
|
||||
l: &sync.Mutex{},
|
||||
}
|
||||
b.router = utils.NewClientKeyRouter(&b.clientControlConns, 50000)
|
||||
return b
|
||||
}
|
||||
|
||||
func (s *MuxBridge) InitService() {
|
||||
|
||||
}
|
||||
func (s *MuxBridge) CheckArgs() {
|
||||
if *s.cfg.CertFile == "" || *s.cfg.KeyFile == "" {
|
||||
log.Fatalf("cert and key file required")
|
||||
}
|
||||
if *s.cfg.LocalType == TYPE_TLS {
|
||||
s.cfg.CertBytes, s.cfg.KeyBytes = utils.TlsBytes(*s.cfg.CertFile, *s.cfg.KeyFile)
|
||||
}
|
||||
}
|
||||
func (s *MuxBridge) StopService() {
|
||||
|
||||
}
|
||||
func (s *MuxBridge) Start(args interface{}) (err error) {
|
||||
s.cfg = args.(MuxBridgeArgs)
|
||||
s.CheckArgs()
|
||||
s.InitService()
|
||||
host, port, _ := net.SplitHostPort(*s.cfg.Local)
|
||||
p, _ := strconv.Atoi(port)
|
||||
sc := utils.NewServerChannel(host, p)
|
||||
if *s.cfg.LocalType == TYPE_TCP {
|
||||
err = sc.ListenTCP(s.handler)
|
||||
} else if *s.cfg.LocalType == TYPE_TLS {
|
||||
err = sc.ListenTls(s.cfg.CertBytes, s.cfg.KeyBytes, nil, s.handler)
|
||||
} else if *s.cfg.LocalType == TYPE_KCP {
|
||||
err = sc.ListenKCP(s.cfg.KCP, s.handler)
|
||||
}
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
log.Printf("%s bridge on %s", *s.cfg.LocalType, (*sc.Listener).Addr())
|
||||
return
|
||||
}
|
||||
func (s *MuxBridge) Clean() {
|
||||
s.StopService()
|
||||
}
|
||||
func (s *MuxBridge) handler(inConn net.Conn) {
|
||||
reader := bufio.NewReader(inConn)
|
||||
|
||||
var err error
|
||||
var connType uint8
|
||||
var key string
|
||||
err = utils.ReadPacket(reader, &connType, &key)
|
||||
if err != nil {
|
||||
log.Printf("read error,ERR:%s", err)
|
||||
return
|
||||
}
|
||||
switch connType {
|
||||
case CONN_SERVER:
|
||||
var serverID string
|
||||
err = utils.ReadPacketData(reader, &serverID)
|
||||
if err != nil {
|
||||
log.Printf("read error,ERR:%s", err)
|
||||
return
|
||||
}
|
||||
log.Printf("server connection %s %s connected", serverID, key)
|
||||
session, err := smux.Server(inConn, nil)
|
||||
if err != nil {
|
||||
utils.CloseConn(&inConn)
|
||||
log.Printf("server session error,ERR:%s", err)
|
||||
return
|
||||
}
|
||||
for {
|
||||
stream, err := session.AcceptStream()
|
||||
if err != nil {
|
||||
session.Close()
|
||||
utils.CloseConn(&inConn)
|
||||
return
|
||||
}
|
||||
go s.callback(stream, serverID, key)
|
||||
}
|
||||
case CONN_CLIENT:
|
||||
log.Printf("client connection %s connected", key)
|
||||
session, err := smux.Client(inConn, nil)
|
||||
if err != nil {
|
||||
utils.CloseConn(&inConn)
|
||||
log.Printf("client session error,ERR:%s", err)
|
||||
return
|
||||
}
|
||||
keyInfo := strings.Split(key, "-")
|
||||
if len(keyInfo) != 2 {
|
||||
utils.CloseConn(&inConn)
|
||||
log.Printf("client key format error,key:%s", key)
|
||||
return
|
||||
}
|
||||
groupKey := keyInfo[0]
|
||||
index := keyInfo[1]
|
||||
s.l.Lock()
|
||||
defer s.l.Unlock()
|
||||
if !s.clientControlConns.Has(groupKey) {
|
||||
item := utils.NewConcurrentMap()
|
||||
s.clientControlConns.Set(groupKey, &item)
|
||||
}
|
||||
_group, _ := s.clientControlConns.Get(groupKey)
|
||||
group := _group.(*utils.ConcurrentMap)
|
||||
group.Set(index, session)
|
||||
// s.clientControlConns.Set(key, session)
|
||||
go func() {
|
||||
for {
|
||||
if session.IsClosed() {
|
||||
s.l.Lock()
|
||||
defer s.l.Unlock()
|
||||
if sess, ok := group.Get(index); ok && sess.(*smux.Session).IsClosed() {
|
||||
group.Remove(index)
|
||||
}
|
||||
if group.IsEmpty() {
|
||||
s.clientControlConns.Remove(groupKey)
|
||||
}
|
||||
break
|
||||
}
|
||||
time.Sleep(time.Second * 5)
|
||||
}
|
||||
}()
|
||||
//log.Printf("set client session,key: %s", key)
|
||||
}
|
||||
|
||||
}
|
||||
func (s *MuxBridge) callback(inConn net.Conn, serverID, key string) {
|
||||
try := 20
|
||||
for {
|
||||
try--
|
||||
if try == 0 {
|
||||
break
|
||||
}
|
||||
if key == "*" {
|
||||
key = s.router.GetKey()
|
||||
}
|
||||
_group, ok := s.clientControlConns.Get(key)
|
||||
if !ok {
|
||||
log.Printf("client %s session not exists for server stream %s, retrying...", key, serverID)
|
||||
time.Sleep(time.Second * 3)
|
||||
continue
|
||||
}
|
||||
group := _group.(*utils.ConcurrentMap)
|
||||
keys := group.Keys()
|
||||
keysLen := len(keys)
|
||||
i := 0
|
||||
if keysLen > 0 {
|
||||
i = rand.Intn(keysLen)
|
||||
} else {
|
||||
log.Printf("client %s session empty for server stream %s, retrying...", key, serverID)
|
||||
time.Sleep(time.Second * 3)
|
||||
continue
|
||||
}
|
||||
index := keys[i]
|
||||
log.Printf("select client : %s-%s", key, index)
|
||||
session, _ := group.Get(index)
|
||||
stream, err := session.(*smux.Session).OpenStream()
|
||||
if err != nil {
|
||||
log.Printf("%s client session open stream %s fail, err: %s, retrying...", key, serverID, err)
|
||||
time.Sleep(time.Second * 3)
|
||||
continue
|
||||
} else {
|
||||
log.Printf("stream %s -> %s created", serverID, key)
|
||||
die1 := make(chan bool, 1)
|
||||
die2 := make(chan bool, 1)
|
||||
go func() {
|
||||
io.Copy(stream, inConn)
|
||||
die1 <- true
|
||||
}()
|
||||
go func() {
|
||||
io.Copy(inConn, stream)
|
||||
die2 <- true
|
||||
}()
|
||||
select {
|
||||
case <-die1:
|
||||
case <-die2:
|
||||
}
|
||||
stream.Close()
|
||||
inConn.Close()
|
||||
log.Printf("%s server %s stream released", key, serverID)
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
}
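The bridge hinges on the smux pairing: whichever side wrapped its connection with smux.Server accepts the streams that the smux.Client side opens, and vice versa. A tiny self-contained round trip over loopback, with error handling mostly elided and port 39000 chosen arbitrarily:

package main

import (
	"io"
	"log"
	"net"

	"github.com/xtaci/smux"
)

func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:39000")
	if err != nil {
		log.Fatal(err)
	}
	go func() {
		c, _ := ln.Accept()
		sess, _ := smux.Server(c, nil) // server side of the mux
		st, _ := sess.AcceptStream()
		io.Copy(st, st) // echo one stream back
	}()
	c, err := net.Dial("tcp", "127.0.0.1:39000")
	if err != nil {
		log.Fatal(err)
	}
	sess, _ := smux.Client(c, nil) // client side opens streams
	st, _ := sess.OpenStream()
	st.Write([]byte("ping"))
	buf := make([]byte, 4)
	io.ReadFull(st, buf)
	log.Printf("echoed: %s", buf)
}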
242 services/mux_client.go (new file)
@ -0,0 +1,242 @@
|
||||
package services
|
||||
|
||||
import (
|
||||
"crypto/tls"
|
||||
"fmt"
|
||||
"io"
|
||||
"log"
|
||||
"net"
|
||||
"snail007/proxy/utils"
|
||||
"time"
|
||||
|
||||
"github.com/golang/snappy"
|
||||
"github.com/xtaci/smux"
|
||||
)
|
||||
|
||||
type MuxClient struct {
|
||||
cfg MuxClientArgs
|
||||
}
|
||||
|
||||
func NewMuxClient() Service {
|
||||
return &MuxClient{
|
||||
cfg: MuxClientArgs{},
|
||||
}
|
||||
}
|
||||
|
||||
func (s *MuxClient) InitService() {
|
||||
|
||||
}
|
||||
|
||||
func (s *MuxClient) CheckArgs() {
|
||||
if *s.cfg.Parent != "" {
|
||||
log.Printf("use tls parent %s", *s.cfg.Parent)
|
||||
} else {
|
||||
log.Fatalf("parent required")
|
||||
}
|
||||
if *s.cfg.CertFile == "" || *s.cfg.KeyFile == "" {
|
||||
log.Fatalf("cert and key file required")
|
||||
}
|
||||
if *s.cfg.ParentType == "tls" {
|
||||
s.cfg.CertBytes, s.cfg.KeyBytes = utils.TlsBytes(*s.cfg.CertFile, *s.cfg.KeyFile)
|
||||
}
|
||||
}
|
||||
func (s *MuxClient) StopService() {
|
||||
|
||||
}
|
||||
func (s *MuxClient) Start(args interface{}) (err error) {
|
||||
s.cfg = args.(MuxClientArgs)
|
||||
s.CheckArgs()
|
||||
s.InitService()
|
||||
log.Printf("client started")
|
||||
count := 1
|
||||
if *s.cfg.SessionCount > 0 {
|
||||
count = *s.cfg.SessionCount
|
||||
}
|
||||
for i := 1; i <= count; i++ {
|
||||
log.Printf("session worker[%d] started", i)
|
||||
go func(i int) {
|
||||
defer func() {
|
||||
e := recover()
|
||||
if e != nil {
|
||||
log.Printf("session worker crashed: %s", e)
|
||||
}
|
||||
}()
|
||||
for {
|
||||
conn, err := s.getParentConn()
|
||||
if err != nil {
|
||||
log.Printf("connection err: %s, retrying...", err)
|
||||
time.Sleep(time.Second * 3)
|
||||
continue
|
||||
}
|
||||
_, err = conn.Write(utils.BuildPacket(CONN_CLIENT, fmt.Sprintf("%s-%d", *s.cfg.Key, i)))
|
||||
if err != nil {
|
||||
conn.Close()
|
||||
log.Printf("connection err: %s, retrying...", err)
|
||||
time.Sleep(time.Second * 3)
|
||||
continue
|
||||
}
|
||||
session, err := smux.Server(conn, nil)
|
||||
if err != nil {
|
||||
log.Printf("session err: %s, retrying...", err)
|
||||
conn.Close()
|
||||
time.Sleep(time.Second * 3)
|
||||
continue
|
||||
}
|
||||
for {
|
||||
stream, err := session.AcceptStream()
|
||||
if err != nil {
|
||||
log.Printf("accept stream err: %s, retrying...", err)
|
||||
session.Close()
|
||||
time.Sleep(time.Second * 3)
|
||||
break
|
||||
}
|
||||
go func() {
|
||||
defer func() {
|
||||
e := recover()
|
||||
if e != nil {
|
||||
log.Printf("stream handler crashed: %s", e)
|
||||
}
|
||||
}()
|
||||
var ID, clientLocalAddr, serverID string
|
||||
err = utils.ReadPacketData(stream, &ID, &clientLocalAddr, &serverID)
|
||||
if err != nil {
|
||||
log.Printf("read stream signal err: %s", err)
|
||||
stream.Close()
|
||||
return
|
||||
}
|
||||
log.Printf("worker[%d] signal revecived,server %s stream %s %s", i, serverID, ID, clientLocalAddr)
|
||||
protocol := clientLocalAddr[:3]
|
||||
localAddr := clientLocalAddr[4:]
|
||||
if protocol == "udp" {
|
||||
s.ServeUDP(stream, localAddr, ID)
|
||||
} else {
|
||||
s.ServeConn(stream, localAddr, ID)
|
||||
}
|
||||
}()
|
||||
}
|
||||
}
|
||||
|
||||
}(i)
|
||||
}
|
||||
return
|
||||
}
|
||||
func (s *MuxClient) Clean() {
|
||||
s.StopService()
|
||||
}
|
||||
func (s *MuxClient) getParentConn() (conn net.Conn, err error) {
|
||||
if *s.cfg.ParentType == "tls" {
|
||||
var _conn tls.Conn
|
||||
_conn, err = utils.TlsConnectHost(*s.cfg.Parent, *s.cfg.Timeout, s.cfg.CertBytes, s.cfg.KeyBytes, nil)
|
||||
if err == nil {
|
||||
conn = net.Conn(&_conn)
|
||||
}
|
||||
} else if *s.cfg.ParentType == "kcp" {
|
||||
conn, err = utils.ConnectKCPHost(*s.cfg.Parent, s.cfg.KCP)
|
||||
} else {
|
||||
conn, err = utils.ConnectHost(*s.cfg.Parent, *s.cfg.Timeout)
|
||||
}
|
||||
return
|
||||
}
|
||||
func (s *MuxClient) ServeUDP(inConn *smux.Stream, localAddr, ID string) {
|
||||
|
||||
for {
|
||||
srcAddr, body, err := utils.ReadUDPPacket(inConn)
|
||||
if err != nil {
|
||||
log.Printf("udp packet revecived fail, err: %s", err)
|
||||
log.Printf("connection %s released", ID)
|
||||
inConn.Close()
|
||||
break
|
||||
} else {
|
||||
//log.Printf("udp packet revecived:%s,%v", srcAddr, body)
|
||||
go s.processUDPPacket(inConn, srcAddr, localAddr, body)
|
||||
}
|
||||
|
||||
}
|
||||
// }
|
||||
}
|
||||
func (s *MuxClient) processUDPPacket(inConn *smux.Stream, srcAddr, localAddr string, body []byte) {
|
||||
dstAddr, err := net.ResolveUDPAddr("udp", localAddr)
|
||||
if err != nil {
|
||||
log.Printf("can't resolve address: %s", err)
|
||||
inConn.Close()
|
||||
return
|
||||
}
|
||||
clientSrcAddr := &net.UDPAddr{IP: net.IPv4zero, Port: 0}
|
||||
conn, err := net.DialUDP("udp", clientSrcAddr, dstAddr)
|
||||
if err != nil {
|
||||
log.Printf("connect to udp %s fail,ERR:%s", dstAddr.String(), err)
|
||||
return
|
||||
}
|
||||
conn.SetDeadline(time.Now().Add(time.Millisecond * time.Duration(*s.cfg.Timeout)))
|
||||
_, err = conn.Write(body)
|
||||
if err != nil {
|
||||
log.Printf("send udp packet to %s fail,ERR:%s", dstAddr.String(), err)
|
||||
return
|
||||
}
|
||||
//log.Printf("send udp packet to %s success", dstAddr.String())
|
||||
buf := make([]byte, 1024)
|
||||
length, _, err := conn.ReadFromUDP(buf)
|
||||
if err != nil {
|
||||
log.Printf("read udp response from %s fail ,ERR:%s", dstAddr.String(), err)
|
||||
return
|
||||
}
|
||||
respBody := buf[0:length]
|
||||
//log.Printf("revecived udp packet from %s , %v", dstAddr.String(), respBody)
|
||||
bs := utils.UDPPacket(srcAddr, respBody)
|
||||
_, err = (*inConn).Write(bs)
|
||||
if err != nil {
|
||||
log.Printf("send udp response fail ,ERR:%s", err)
|
||||
inConn.Close()
|
||||
return
|
||||
}
|
||||
//log.Printf("send udp response success ,from:%s ,%d ,%v", dstAddr.String(), len(bs), bs)
|
||||
}
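processUDPPacket is essentially a one-shot UDP request/response: dial the local target, forward the datagram, wait briefly for a single reply, then send that reply back over the stream. A hedged standalone helper in the same spirit; the 127.0.0.1:9 target in main is only a placeholder and will normally just time out:

package main

import (
	"log"
	"net"
	"time"
)

// queryUDP sends one datagram and waits for at most one reply.
func queryUDP(addr string, payload []byte, timeout time.Duration) ([]byte, error) {
	dst, err := net.ResolveUDPAddr("udp", addr)
	if err != nil {
		return nil, err
	}
	conn, err := net.DialUDP("udp", nil, dst)
	if err != nil {
		return nil, err
	}
	defer conn.Close()
	conn.SetDeadline(time.Now().Add(timeout))
	if _, err := conn.Write(payload); err != nil {
		return nil, err
	}
	buf := make([]byte, 1024)
	n, _, err := conn.ReadFromUDP(buf)
	if err != nil {
		return nil, err
	}
	return buf[:n], nil
}

func main() {
	_, err := queryUDP("127.0.0.1:9", []byte("ping"), time.Second)
	log.Println("reply err (expected on most hosts):", err)
}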
func (s *MuxClient) ServeConn(inConn *smux.Stream, localAddr, ID string) {
|
||||
var err error
|
||||
var outConn net.Conn
|
||||
i := 0
|
||||
for {
|
||||
i++
|
||||
outConn, err = utils.ConnectHost(localAddr, *s.cfg.Timeout)
|
||||
if err == nil || i == 3 {
|
||||
break
|
||||
} else {
|
||||
if i == 3 {
|
||||
log.Printf("connect to %s err: %s, retrying...", localAddr, err)
|
||||
time.Sleep(2 * time.Second)
|
||||
continue
|
||||
}
|
||||
}
|
||||
}
|
||||
if err != nil {
|
||||
inConn.Close()
|
||||
utils.CloseConn(&outConn)
|
||||
log.Printf("build connection error, err: %s", err)
|
||||
return
|
||||
}
|
||||
|
||||
log.Printf("stream %s created", ID)
|
||||
if *s.cfg.IsCompress {
|
||||
die1 := make(chan bool, 1)
|
||||
die2 := make(chan bool, 1)
|
||||
go func() {
|
||||
io.Copy(outConn, snappy.NewReader(inConn))
|
||||
die1 <- true
|
||||
}()
|
||||
go func() {
|
||||
io.Copy(snappy.NewWriter(inConn), outConn)
|
||||
die2 <- true
|
||||
}()
|
||||
select {
|
||||
case <-die1:
|
||||
case <-die2:
|
||||
}
|
||||
outConn.Close()
|
||||
inConn.Close()
|
||||
log.Printf("%s stream %s released", *s.cfg.Key, ID)
|
||||
} else {
|
||||
utils.IoBind(inConn, outConn, func(err interface{}) {
|
||||
log.Printf("stream %s released", ID)
|
||||
})
|
||||
}
|
||||
}
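The IsCompress branch wraps each direction of the copy in snappy framing. A minimal demo of that framing over an in-memory net.Pipe, independent of the proxy code:

package main

import (
	"fmt"
	"io"
	"net"

	"github.com/golang/snappy"
)

func main() {
	a, b := net.Pipe()
	go func() {
		w := snappy.NewWriter(a) // compress everything written to a
		w.Write([]byte("hello over a compressed stream"))
		w.Flush()
		a.Close()
	}()
	// decompress everything read from b until EOF
	data, _ := io.ReadAll(snappy.NewReader(b))
	fmt.Printf("%s\n", data)
}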
370 services/mux_server.go (new file)
@ -0,0 +1,370 @@
|
||||
package services
|
||||
|
||||
import (
|
||||
"crypto/tls"
|
||||
"fmt"
|
||||
"io"
|
||||
"log"
|
||||
"math/rand"
|
||||
"net"
|
||||
"runtime/debug"
|
||||
"snail007/proxy/utils"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/golang/snappy"
|
||||
"github.com/xtaci/smux"
|
||||
)
|
||||
|
||||
type MuxServer struct {
|
||||
cfg MuxServerArgs
|
||||
udpChn chan MuxUDPItem
|
||||
sc utils.ServerChannel
|
||||
sessions utils.ConcurrentMap
|
||||
lockChn chan bool
|
||||
}
|
||||
|
||||
type MuxServerManager struct {
|
||||
cfg MuxServerArgs
|
||||
udpChn chan MuxUDPItem
|
||||
sc utils.ServerChannel
|
||||
serverID string
|
||||
}
|
||||
|
||||
func NewMuxServerManager() Service {
|
||||
return &MuxServerManager{
|
||||
cfg: MuxServerArgs{},
|
||||
udpChn: make(chan MuxUDPItem, 50000),
|
||||
serverID: utils.Uniqueid(),
|
||||
}
|
||||
}
|
||||
func (s *MuxServerManager) Start(args interface{}) (err error) {
|
||||
s.cfg = args.(MuxServerArgs)
|
||||
s.CheckArgs()
|
||||
if *s.cfg.Parent != "" {
|
||||
log.Printf("use %s parent %s", *s.cfg.ParentType, *s.cfg.Parent)
|
||||
} else {
|
||||
log.Fatalf("parent required")
|
||||
}
|
||||
|
||||
s.InitService()
|
||||
|
||||
log.Printf("server id: %s", s.serverID)
|
||||
//log.Printf("route:%v", *s.cfg.Route)
|
||||
for _, _info := range *s.cfg.Route {
|
||||
if _info == "" {
|
||||
continue
|
||||
}
|
||||
IsUDP := *s.cfg.IsUDP
|
||||
if strings.HasPrefix(_info, "udp://") {
|
||||
IsUDP = true
|
||||
}
|
||||
info := strings.TrimPrefix(_info, "udp://")
|
||||
info = strings.TrimPrefix(info, "tcp://")
|
||||
_routeInfo := strings.Split(info, "@")
|
||||
server := NewMuxServer()
|
||||
local := _routeInfo[0]
|
||||
remote := _routeInfo[1]
|
||||
KEY := *s.cfg.Key
|
||||
if strings.HasPrefix(remote, "[") {
|
||||
KEY = remote[1:strings.LastIndex(remote, "]")]
|
||||
remote = remote[strings.LastIndex(remote, "]")+1:]
|
||||
}
|
||||
if strings.HasPrefix(remote, ":") {
|
||||
remote = fmt.Sprintf("127.0.0.1%s", remote)
|
||||
}
|
||||
err = server.Start(MuxServerArgs{
|
||||
CertBytes: s.cfg.CertBytes,
|
||||
KeyBytes: s.cfg.KeyBytes,
|
||||
Parent: s.cfg.Parent,
|
||||
CertFile: s.cfg.CertFile,
|
||||
KeyFile: s.cfg.KeyFile,
|
||||
Local: &local,
|
||||
IsUDP: &IsUDP,
|
||||
Remote: &remote,
|
||||
Key: &KEY,
|
||||
Timeout: s.cfg.Timeout,
|
||||
Mgr: s,
|
||||
IsCompress: s.cfg.IsCompress,
|
||||
SessionCount: s.cfg.SessionCount,
|
||||
KCP: s.cfg.KCP,
|
||||
ParentType: s.cfg.ParentType,
|
||||
})
|
||||
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
}
|
||||
return
|
||||
}
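The route strings accepted by -r follow PROTOCOL://LOCAL_IP:LOCAL_PORT@[CLIENT_KEY]CLIENT_HOST:CLIENT_PORT, and the loop above peels them apart inline. A hypothetical helper that isolates just that parsing, assuming well-formed input containing an '@':

package main

import (
	"fmt"
	"strings"
)

// parseRoute splits one -r route into its pieces, following the inline
// logic of MuxServerManager.Start above.
func parseRoute(route, defaultKey string) (isUDP bool, local, key, remote string) {
	isUDP = strings.HasPrefix(route, "udp://")
	info := strings.TrimPrefix(strings.TrimPrefix(route, "udp://"), "tcp://")
	parts := strings.SplitN(info, "@", 2)
	local, remote, key = parts[0], parts[1], defaultKey
	if strings.HasPrefix(remote, "[") {
		key = remote[1:strings.LastIndex(remote, "]")]
		remote = remote[strings.LastIndex(remote, "]")+1:]
	}
	if strings.HasPrefix(remote, ":") {
		remote = "127.0.0.1" + remote
	}
	return
}

func main() {
	fmt.Println(parseRoute("udp://:5353@[team1]127.0.0.1:53", "default"))
	// prints: true :5353 team1 127.0.0.1:53
}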
func (s *MuxServerManager) Clean() {
|
||||
s.StopService()
|
||||
}
|
||||
func (s *MuxServerManager) StopService() {
|
||||
}
|
||||
func (s *MuxServerManager) CheckArgs() {
|
||||
if *s.cfg.CertFile == "" || *s.cfg.KeyFile == "" {
|
||||
log.Fatalf("cert and key file required")
|
||||
}
|
||||
if *s.cfg.ParentType == "tls" {
|
||||
s.cfg.CertBytes, s.cfg.KeyBytes = utils.TlsBytes(*s.cfg.CertFile, *s.cfg.KeyFile)
|
||||
}
|
||||
}
|
||||
func (s *MuxServerManager) InitService() {
|
||||
}
|
||||
|
||||
func NewMuxServer() Service {
|
||||
return &MuxServer{
|
||||
cfg: MuxServerArgs{},
|
||||
udpChn: make(chan MuxUDPItem, 50000),
|
||||
lockChn: make(chan bool, 1),
|
||||
sessions: utils.NewConcurrentMap(),
|
||||
}
|
||||
}
|
||||
|
||||
type MuxUDPItem struct {
|
||||
packet *[]byte
|
||||
localAddr *net.UDPAddr
|
||||
srcAddr *net.UDPAddr
|
||||
}
|
||||
|
||||
func (s *MuxServer) InitService() {
|
||||
s.UDPConnDeamon()
|
||||
}
|
||||
func (s *MuxServer) CheckArgs() {
|
||||
if *s.cfg.Remote == "" {
|
||||
log.Fatalf("remote required")
|
||||
}
|
||||
}
|
||||
|
||||
func (s *MuxServer) Start(args interface{}) (err error) {
|
||||
s.cfg = args.(MuxServerArgs)
|
||||
s.CheckArgs()
|
||||
s.InitService()
|
||||
host, port, _ := net.SplitHostPort(*s.cfg.Local)
|
||||
p, _ := strconv.Atoi(port)
|
||||
s.sc = utils.NewServerChannel(host, p)
|
||||
if *s.cfg.IsUDP {
|
||||
err = s.sc.ListenUDP(func(packet []byte, localAddr, srcAddr *net.UDPAddr) {
|
||||
s.udpChn <- MuxUDPItem{
|
||||
packet: &packet,
|
||||
localAddr: localAddr,
|
||||
srcAddr: srcAddr,
|
||||
}
|
||||
})
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
log.Printf("server on %s", (*s.sc.UDPListener).LocalAddr())
|
||||
} else {
|
||||
err = s.sc.ListenTCP(func(inConn net.Conn) {
|
||||
defer func() {
|
||||
if err := recover(); err != nil {
|
||||
log.Printf("connection handler crashed with err : %s \nstack: %s", err, string(debug.Stack()))
|
||||
}
|
||||
}()
|
||||
var outConn net.Conn
|
||||
var ID string
|
||||
for {
|
||||
outConn, ID, err = s.GetOutConn()
|
||||
if err != nil {
|
||||
utils.CloseConn(&outConn)
|
||||
log.Printf("connect to %s fail, err: %s, retrying...", *s.cfg.Parent, err)
|
||||
time.Sleep(time.Second * 3)
|
||||
continue
|
||||
} else {
|
||||
break
|
||||
}
|
||||
}
|
||||
log.Printf("%s stream %s created", *s.cfg.Key, ID)
|
||||
if *s.cfg.IsCompress {
|
||||
die1 := make(chan bool, 1)
|
||||
die2 := make(chan bool, 1)
|
||||
go func() {
|
||||
io.Copy(inConn, snappy.NewReader(outConn))
|
||||
die1 <- true
|
||||
}()
|
||||
go func() {
|
||||
io.Copy(snappy.NewWriter(outConn), inConn)
|
||||
die2 <- true
|
||||
}()
|
||||
select {
|
||||
case <-die1:
|
||||
case <-die2:
|
||||
}
|
||||
outConn.Close()
|
||||
inConn.Close()
|
||||
log.Printf("%s stream %s released", *s.cfg.Key, ID)
|
||||
} else {
|
||||
utils.IoBind(inConn, outConn, func(err interface{}) {
|
||||
log.Printf("%s stream %s released", *s.cfg.Key, ID)
|
||||
})
|
||||
}
|
||||
})
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
log.Printf("server on %s", (*s.sc.Listener).Addr())
|
||||
}
|
||||
return
|
||||
}
|
||||
func (s *MuxServer) Clean() {
|
||||
|
||||
}
|
||||
func (s *MuxServer) GetOutConn() (outConn net.Conn, ID string, err error) {
|
||||
i := 1
|
||||
if *s.cfg.SessionCount > 0 {
|
||||
i = rand.Intn(*s.cfg.SessionCount)
|
||||
}
|
||||
outConn, err = s.GetConn(fmt.Sprintf("%d", i))
|
||||
if err != nil {
|
||||
log.Printf("connection err: %s", err)
|
||||
return
|
||||
}
|
||||
remoteAddr := "tcp:" + *s.cfg.Remote
|
||||
if *s.cfg.IsUDP {
|
||||
remoteAddr = "udp:" + *s.cfg.Remote
|
||||
}
|
||||
ID = utils.Uniqueid()
|
||||
_, err = outConn.Write(utils.BuildPacketData(ID, remoteAddr, s.cfg.Mgr.serverID))
|
||||
if err != nil {
|
||||
log.Printf("write stream data err: %s ,retrying...", err)
|
||||
utils.CloseConn(&outConn)
|
||||
return
|
||||
}
|
||||
return
|
||||
}
|
||||
func (s *MuxServer) GetConn(index string) (conn net.Conn, err error) {
|
||||
select {
|
||||
case s.lockChn <- true:
|
||||
default:
|
||||
err = fmt.Errorf("can not connect at same time")
|
||||
return
|
||||
}
|
||||
defer func() {
|
||||
<-s.lockChn
|
||||
}()
|
||||
var session *smux.Session
|
||||
_session, ok := s.sessions.Get(index)
|
||||
if !ok {
|
||||
var c net.Conn
|
||||
c, err = s.getParentConn()
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
_, err = c.Write(utils.BuildPacket(CONN_SERVER, *s.cfg.Key, s.cfg.Mgr.serverID))
|
||||
if err != nil {
|
||||
c.Close()
|
||||
return
|
||||
}
|
||||
if err == nil {
|
||||
session, err = smux.Client(c, nil)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
}
|
||||
s.sessions.Set(index, session)
|
||||
log.Printf("session[%s] created", index)
|
||||
} else {
|
||||
session = _session.(*smux.Session)
|
||||
}
|
||||
conn, err = session.OpenStream()
|
||||
if err != nil {
|
||||
session.Close()
|
||||
s.sessions.Remove(index)
|
||||
} else {
|
||||
go func() {
|
||||
for {
|
||||
if session.IsClosed() {
|
||||
s.sessions.Remove(index)
|
||||
break
|
||||
}
|
||||
time.Sleep(time.Second * 5)
|
||||
}
|
||||
}()
|
||||
}
|
||||
|
||||
return
|
||||
}
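GetConn guards session creation with lockChn, a buffered channel of size one used as a non-blocking mutex: a successful send acquires it, a failed select means another caller already holds it. The same trick in isolation:

package main

import (
	"errors"
	"fmt"
)

// tryLock is a capacity-1 channel acting as a non-blocking mutex.
type tryLock chan struct{}

func newTryLock() tryLock { return make(tryLock, 1) }

func (l tryLock) TryAcquire() error {
	select {
	case l <- struct{}{}: // slot was free, we now hold the lock
		return nil
	default: // slot occupied, someone else holds it
		return errors.New("can not connect at same time")
	}
}

func (l tryLock) Release() { <-l }

func main() {
	l := newTryLock()
	fmt.Println(l.TryAcquire()) // nil
	fmt.Println(l.TryAcquire()) // error: already held
	l.Release()
	fmt.Println(l.TryAcquire()) // nil again
}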
func (s *MuxServer) getParentConn() (conn net.Conn, err error) {
|
||||
if *s.cfg.ParentType == "tls" {
|
||||
var _conn tls.Conn
|
||||
_conn, err = utils.TlsConnectHost(*s.cfg.Parent, *s.cfg.Timeout, s.cfg.CertBytes, s.cfg.KeyBytes, nil)
|
||||
if err == nil {
|
||||
conn = net.Conn(&_conn)
|
||||
}
|
||||
} else if *s.cfg.ParentType == "kcp" {
|
||||
conn, err = utils.ConnectKCPHost(*s.cfg.Parent, s.cfg.KCP)
|
||||
} else {
|
||||
conn, err = utils.ConnectHost(*s.cfg.Parent, *s.cfg.Timeout)
|
||||
}
|
||||
return
|
||||
}
|
||||
func (s *MuxServer) UDPConnDeamon() {
    go func() {
        defer func() {
            if err := recover(); err != nil {
                log.Printf("udp conn daemon crashed with err : %s \nstack: %s", err, string(debug.Stack()))
            }
        }()
        var outConn net.Conn
        var ID string
        var err error
        for {
            item := <-s.udpChn
        RETRY:
            if outConn == nil {
                for {
                    outConn, ID, err = s.GetOutConn()
                    if err != nil {
                        outConn = nil
                        utils.CloseConn(&outConn)
                        log.Printf("connect to %s fail, err: %s, retrying...", *s.cfg.Parent, err)
                        time.Sleep(time.Second * 3)
                        continue
                    } else {
                        go func(outConn net.Conn, ID string) {
                            go func() {
                                // outConn.Close()
                            }()
                            for {
                                srcAddrFromConn, body, err := utils.ReadUDPPacket(outConn)
                                if err != nil {
                                    log.Printf("parse received udp packet fail, err: %s ,%v", err, body)
                                    log.Printf("UDP daemon connection %s exited", ID)
                                    break
                                }
                                //log.Printf("udp packet received over parent , local:%s", srcAddrFromConn)
                                _srcAddr := strings.Split(srcAddrFromConn, ":")
                                if len(_srcAddr) != 2 {
                                    log.Printf("parse received udp packet fail, addr error : %s", srcAddrFromConn)
                                    continue
                                }
                                port, _ := strconv.Atoi(_srcAddr[1])
                                dstAddr := &net.UDPAddr{IP: net.ParseIP(_srcAddr[0]), Port: port}
                                _, err = s.sc.UDPListener.WriteToUDP(body, dstAddr)
                                if err != nil {
                                    log.Printf("udp response to local %s fail,ERR:%s", srcAddrFromConn, err)
                                    continue
                                }
                                //log.Printf("udp response to local %s success , %v", srcAddrFromConn, body)
                            }
                        }(outConn, ID)
                        break
                    }
                }
            }
            outConn.SetWriteDeadline(time.Now().Add(time.Second))
            _, err = outConn.Write(utils.UDPPacket(item.srcAddr.String(), *item.packet))
            outConn.SetWriteDeadline(time.Time{})
            if err != nil {
                utils.CloseConn(&outConn)
                outConn = nil
                log.Printf("write udp packet to %s fail ,flush err:%s ,retrying...", *s.cfg.Parent, err)
                goto RETRY
            }
            //log.Printf("write packet %v", *item.packet)
        }
    }()
}
@@ -6,10 +6,10 @@ import (
 	"io/ioutil"
 	"log"
 	"net"
-	"proxy/utils"
-	"proxy/utils/aes"
-	"proxy/utils/socks"
 	"runtime/debug"
+	"snail007/proxy/utils"
+	"snail007/proxy/utils/aes"
+	"snail007/proxy/utils/socks"
 	"strings"
 	"time"

@@ -17,12 +17,13 @@ import (
 )

 type Socks struct {
-	cfg       SocksArgs
-	checker   utils.Checker
-	basicAuth utils.BasicAuth
-	sshClient *ssh.Client
-	lockChn   chan bool
-	udpSC     utils.ServerChannel
+	cfg            SocksArgs
+	checker        utils.Checker
+	basicAuth      utils.BasicAuth
+	sshClient      *ssh.Client
+	lockChn        chan bool
+	udpSC          utils.ServerChannel
+	domainResolver utils.DomainResolver
 }

 func NewSocks() Service {
@@ -36,16 +37,28 @@ func NewSocks() Service {

 func (s *Socks) CheckArgs() {
 	var err error
-	if *s.cfg.LocalType == "tls" {
+	if *s.cfg.LocalType == "tls" || (*s.cfg.Parent != "" && *s.cfg.ParentType == "tls") {
 		s.cfg.CertBytes, s.cfg.KeyBytes = utils.TlsBytes(*s.cfg.CertFile, *s.cfg.KeyFile)
+		if *s.cfg.CaCertFile != "" {
+			s.cfg.CaCertBytes, err = ioutil.ReadFile(*s.cfg.CaCertFile)
+			if err != nil {
+				log.Fatalf("read ca file error,ERR:%s", err)
+			}
+		}
 	}
 	if *s.cfg.Parent != "" {
 		if *s.cfg.ParentType == "" {
-			log.Fatalf("parent type unkown,use -T <tls|tcp|ssh>")
-		}
-		if *s.cfg.ParentType == "tls" {
-			s.cfg.CertBytes, s.cfg.KeyBytes = utils.TlsBytes(*s.cfg.CertFile, *s.cfg.KeyFile)
+			log.Fatalf("parent type unkown,use -T <tls|tcp|ssh|kcp>")
 		}
+		// if *s.cfg.ParentType == "tls" {
+		// 	s.cfg.CertBytes, s.cfg.KeyBytes = utils.TlsBytes(*s.cfg.CertFile, *s.cfg.KeyFile)
+		// 	if *s.cfg.CaCertFile != "" {
+		// 		s.cfg.CaCertBytes, err = ioutil.ReadFile(*s.cfg.CaCertFile)
+		// 		if err != nil {
+		// 			log.Fatalf("read ca file error,ERR:%s", err)
+		// 		}
+		// 	}
+		// }
 		if *s.cfg.ParentType == "ssh" {
 			if *s.cfg.SSHUser == "" {
 				log.Fatalf("ssh user required")
@ -77,6 +90,9 @@ func (s *Socks) CheckArgs() {
|
||||
}
|
||||
func (s *Socks) InitService() {
|
||||
s.InitBasicAuth()
|
||||
if *s.cfg.DNSAddress != "" {
|
||||
(*s).domainResolver = utils.NewDomainResolver(*s.cfg.DNSAddress, *s.cfg.DNSTTL)
|
||||
}
|
||||
s.checker = utils.NewChecker(*s.cfg.Timeout, int64(*s.cfg.Interval), *s.cfg.Blocked, *s.cfg.Direct)
|
||||
if *s.cfg.ParentType == "ssh" {
|
||||
err := s.ConnectSSH()
|
||||
@ -86,7 +102,7 @@ func (s *Socks) InitService() {
|
||||
go func() {
|
||||
//periodically check the ssh network connectivity in a loop
|
||||
for {
|
||||
conn, err := utils.ConnectHost(*s.cfg.Parent, *s.cfg.Timeout*2)
|
||||
conn, err := utils.ConnectHost(s.Resolve(*s.cfg.Parent), *s.cfg.Timeout*2)
|
||||
if err == nil {
|
||||
_, err = conn.Write([]byte{0})
|
||||
}
|
||||
@ -106,7 +122,6 @@ func (s *Socks) InitService() {
|
||||
if *s.cfg.ParentType == "ssh" {
|
||||
log.Println("warn: socks udp not supported for ssh")
|
||||
} else {
|
||||
|
||||
s.udpSC = utils.NewServerChannelHost(*s.cfg.UDPLocal)
|
||||
err := s.udpSC.ListenUDP(s.udpCallback)
|
||||
if err != nil {
|
||||
@ -135,9 +150,9 @@ func (s *Socks) Start(args interface{}) (err error) {
|
||||
if *s.cfg.LocalType == TYPE_TCP {
|
||||
err = sc.ListenTCP(s.socksConnCallback)
|
||||
} else if *s.cfg.LocalType == TYPE_TLS {
|
||||
err = sc.ListenTls(s.cfg.CertBytes, s.cfg.KeyBytes, s.socksConnCallback)
|
||||
err = sc.ListenTls(s.cfg.CertBytes, s.cfg.KeyBytes, nil, s.socksConnCallback)
|
||||
} else if *s.cfg.LocalType == TYPE_KCP {
|
||||
err = sc.ListenKCP(*s.cfg.KCPMethod, *s.cfg.KCPKey, s.socksConnCallback)
|
||||
err = sc.ListenKCP(s.cfg.KCP, s.socksConnCallback)
|
||||
}
|
||||
if err != nil {
|
||||
return
|
||||
@ -188,7 +203,7 @@ func (s *Socks) udpCallback(b []byte, localAddr, srcAddr *net.UDPAddr) {
|
||||
if parent == "" {
|
||||
parent = *s.cfg.Parent
|
||||
}
|
||||
dstAddr, err := net.ResolveUDPAddr("udp", parent)
|
||||
dstAddr, err := net.ResolveUDPAddr("udp", s.Resolve(parent))
|
||||
if err != nil {
|
||||
log.Printf("can't resolve address: %s", err)
|
||||
return
|
||||
@ -244,7 +259,7 @@ func (s *Socks) udpCallback(b []byte, localAddr, srcAddr *net.UDPAddr) {
|
||||
|
||||
} else {
|
||||
//proxy locally (direct connection)
|
||||
dstAddr, err := net.ResolveUDPAddr("udp", net.JoinHostPort(p.Host(), p.Port()))
|
||||
dstAddr, err := net.ResolveUDPAddr("udp", net.JoinHostPort(s.Resolve(p.Host()), p.Port()))
|
||||
if err != nil {
|
||||
log.Printf("can't resolve address: %s", err)
|
||||
return
|
||||
@ -416,15 +431,22 @@ func (s *Socks) proxyTCP(inConn *net.Conn, methodReq socks.MethodsRequest, reque
|
||||
outConn, err = s.getOutConn(methodReq.Bytes(), request.Bytes(), request.Addr())
|
||||
} else {
|
||||
if *s.cfg.Parent != "" {
|
||||
s.checker.Add(request.Addr(), true, "", "", nil)
|
||||
useProxy, _, _ = s.checker.IsBlocked(request.Addr())
|
||||
host, _, _ := net.SplitHostPort(request.Addr())
|
||||
useProxy := false
|
||||
if utils.IsIternalIP(host) {
|
||||
useProxy = false
|
||||
} else {
|
||||
k := s.Resolve(request.Addr())
|
||||
s.checker.Add(k)
|
||||
useProxy, _, _ = s.checker.IsBlocked(k)
|
||||
}
|
||||
if useProxy {
|
||||
outConn, err = s.getOutConn(methodReq.Bytes(), request.Bytes(), request.Addr())
|
||||
} else {
|
||||
outConn, err = utils.ConnectHost(request.Addr(), *s.cfg.Timeout)
|
||||
outConn, err = utils.ConnectHost(s.Resolve(request.Addr()), *s.cfg.Timeout)
|
||||
}
|
||||
} else {
|
||||
outConn, err = utils.ConnectHost(request.Addr(), *s.cfg.Timeout)
|
||||
outConn, err = utils.ConnectHost(s.Resolve(request.Addr()), *s.cfg.Timeout)
|
||||
useProxy = false
|
||||
}
|
||||
}
|
||||
@ -461,12 +483,12 @@ func (s *Socks) getOutConn(methodBytes, reqBytes []byte, host string) (outConn n
|
||||
case "tcp":
|
||||
if *s.cfg.ParentType == "tls" {
|
||||
var _outConn tls.Conn
|
||||
_outConn, err = utils.TlsConnectHost(*s.cfg.Parent, *s.cfg.Timeout, s.cfg.CertBytes, s.cfg.KeyBytes)
|
||||
_outConn, err = utils.TlsConnectHost(s.Resolve(*s.cfg.Parent), *s.cfg.Timeout, s.cfg.CertBytes, s.cfg.KeyBytes, nil)
|
||||
outConn = net.Conn(&_outConn)
|
||||
} else if *s.cfg.ParentType == "kcp" {
|
||||
outConn, err = utils.ConnectKCPHost(*s.cfg.Parent, *s.cfg.KCPMethod, *s.cfg.KCPKey)
|
||||
outConn, err = utils.ConnectKCPHost(s.Resolve(*s.cfg.Parent), s.cfg.KCP)
|
||||
} else {
|
||||
outConn, err = utils.ConnectHost(*s.cfg.Parent, *s.cfg.Timeout)
|
||||
outConn, err = utils.ConnectHost(s.Resolve(*s.cfg.Parent), *s.cfg.Timeout)
|
||||
}
|
||||
if err != nil {
|
||||
err = fmt.Errorf("connect fail,%s", err)
|
||||
@ -556,12 +578,16 @@ func (s *Socks) ConnectSSH() (err error) {
|
||||
if s.sshClient != nil {
|
||||
s.sshClient.Close()
|
||||
}
|
||||
s.sshClient, err = ssh.Dial("tcp", *s.cfg.Parent, &config)
|
||||
s.sshClient, err = ssh.Dial("tcp", s.Resolve(*s.cfg.Parent), &config)
|
||||
<-s.lockChn
|
||||
return
|
||||
}
|
||||
func (s *Socks) InitBasicAuth() (err error) {
|
||||
s.basicAuth = utils.NewBasicAuth()
|
||||
if *s.cfg.DNSAddress != "" {
|
||||
s.basicAuth = utils.NewBasicAuth(&(*s).domainResolver)
|
||||
} else {
|
||||
s.basicAuth = utils.NewBasicAuth(nil)
|
||||
}
|
||||
if *s.cfg.AuthURL != "" {
|
||||
s.basicAuth.SetAuthURL(*s.cfg.AuthURL, *s.cfg.AuthURLOkCode, *s.cfg.AuthURLTimeout, *s.cfg.AuthURLRetry)
|
||||
log.Printf("auth from %s", *s.cfg.AuthURL)
|
||||
@@ -595,7 +621,11 @@ func (s *Socks) IsDeadLoop(inLocalAddr string, host string) bool {
 	}
 	if inPort == outPort {
 		var outIPs []net.IP
-		outIPs, err = net.LookupIP(outDomain)
+		if *s.cfg.DNSAddress != "" {
+			outIPs = []net.IP{net.ParseIP(s.Resolve(outDomain))}
+		} else {
+			outIPs, err = net.LookupIP(outDomain)
+		}
 		if err == nil {
 			for _, ip := range outIPs {
 				if ip.String() == inIP {
@@ -619,3 +649,13 @@ func (s *Socks) IsDeadLoop(inLocalAddr string, host string) bool {
 	}
 	return false
 }
+func (s *Socks) Resolve(address string) string {
+	if *s.cfg.DNSAddress == "" {
+		return address
+	}
+	ip, err := s.domainResolver.Resolve(address)
+	if err != nil {
+		log.Printf("dns error %s , ERR:%s", address, err)
+	}
+	return ip
+}

services/sps.go (new file, 333 lines)
@@ -0,0 +1,333 @@
|
||||
package services
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"log"
|
||||
"net"
|
||||
"runtime/debug"
|
||||
"snail007/proxy/utils"
|
||||
"snail007/proxy/utils/socks"
|
||||
"strconv"
|
||||
"strings"
|
||||
)
|
||||
|
||||
type SPS struct {
|
||||
outPool utils.OutPool
|
||||
cfg SPSArgs
|
||||
domainResolver utils.DomainResolver
|
||||
}
|
||||
|
||||
func NewSPS() Service {
|
||||
return &SPS{
|
||||
outPool: utils.OutPool{},
|
||||
cfg: SPSArgs{},
|
||||
}
|
||||
}
|
||||
func (s *SPS) CheckArgs() {
|
||||
if *s.cfg.Parent == "" {
|
||||
log.Fatalf("parent required for %s %s", s.cfg.Protocol(), *s.cfg.Local)
|
||||
}
|
||||
if *s.cfg.ParentType == "" {
|
||||
log.Fatalf("parent type unknown,use -T <tls|tcp|kcp>")
|
||||
}
|
||||
if *s.cfg.ParentType == TYPE_TLS || *s.cfg.LocalType == TYPE_TLS {
|
||||
s.cfg.CertBytes, s.cfg.KeyBytes = utils.TlsBytes(*s.cfg.CertFile, *s.cfg.KeyFile)
|
||||
if *s.cfg.CaCertFile != "" {
|
||||
var err error
|
||||
s.cfg.CaCertBytes, err = ioutil.ReadFile(*s.cfg.CaCertFile)
|
||||
if err != nil {
|
||||
log.Fatalf("read ca file error,ERR:%s", err)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
func (s *SPS) InitService() {
|
||||
s.InitOutConnPool()
|
||||
}
|
||||
func (s *SPS) InitOutConnPool() {
    if *s.cfg.ParentType == TYPE_TLS || *s.cfg.ParentType == TYPE_TCP || *s.cfg.ParentType == TYPE_KCP {
        //args: check interval, parent type, kcp config, certBytes, keyBytes, caCertBytes,
        //parent address, timeout, initial cap, max cap
        s.outPool = utils.NewOutPool(
            0,
            *s.cfg.ParentType,
            s.cfg.KCP,
            s.cfg.CertBytes, s.cfg.KeyBytes, nil,
            *s.cfg.Parent,
            *s.cfg.Timeout,
            0,
            0,
        )
    }
}
|
||||
func (s *SPS) StopService() {
|
||||
if s.outPool.Pool != nil {
|
||||
s.outPool.Pool.ReleaseAll()
|
||||
}
|
||||
}
|
||||
func (s *SPS) Start(args interface{}) (err error) {
|
||||
s.cfg = args.(SPSArgs)
|
||||
s.CheckArgs()
|
||||
log.Printf("use %s %s parent %s", *s.cfg.ParentType, *s.cfg.ParentServiceType, *s.cfg.Parent)
|
||||
s.InitService()
|
||||
|
||||
for _, addr := range strings.Split(*s.cfg.Local, ",") {
|
||||
if addr != "" {
|
||||
host, port, _ := net.SplitHostPort(*s.cfg.Local)
|
||||
p, _ := strconv.Atoi(port)
|
||||
sc := utils.NewServerChannel(host, p)
|
||||
if *s.cfg.LocalType == TYPE_TCP {
|
||||
err = sc.ListenTCP(s.callback)
|
||||
} else if *s.cfg.LocalType == TYPE_TLS {
|
||||
err = sc.ListenTls(s.cfg.CertBytes, s.cfg.KeyBytes, nil, s.callback)
|
||||
} else if *s.cfg.LocalType == TYPE_KCP {
|
||||
err = sc.ListenKCP(s.cfg.KCP, s.callback)
|
||||
}
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
log.Printf("%s http(s)+socks proxy on %s", s.cfg.Protocol(), (*sc.Listener).Addr())
|
||||
}
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func (s *SPS) Clean() {
|
||||
s.StopService()
|
||||
}
|
||||
func (s *SPS) callback(inConn net.Conn) {
|
||||
defer func() {
|
||||
if err := recover(); err != nil {
|
||||
log.Printf("%s conn handler crashed with err : %s \nstack: %s", s.cfg.Protocol(), err, string(debug.Stack()))
|
||||
}
|
||||
}()
|
||||
var err error
|
||||
switch *s.cfg.ParentType {
|
||||
case TYPE_KCP:
|
||||
fallthrough
|
||||
case TYPE_TCP:
|
||||
fallthrough
|
||||
case TYPE_TLS:
|
||||
err = s.OutToTCP(&inConn)
|
||||
default:
|
||||
err = fmt.Errorf("unknown parent type %s", *s.cfg.ParentType)
|
||||
}
|
||||
if err != nil {
|
||||
log.Printf("connect to %s parent %s fail, ERR:%s", *s.cfg.ParentType, *s.cfg.Parent, err)
|
||||
utils.CloseConn(&inConn)
|
||||
}
|
||||
}
|
||||
func (s *SPS) OutToTCP(inConn *net.Conn) (err error) {
|
||||
buf := make([]byte, 1024)
|
||||
n, err := (*inConn).Read(buf)
|
||||
header := buf[:n]
|
||||
if err != nil {
|
||||
log.Printf("ERR:%s", err)
|
||||
utils.CloseConn(inConn)
|
||||
return
|
||||
}
|
||||
address := ""
|
||||
var forwardBytes []byte
|
||||
//fmt.Printf("%v", header)
|
||||
if header[0] == socks.VERSION_V5 {
|
||||
//socks
|
||||
methodReq, e := socks.NewMethodsRequest(*inConn, header)
|
||||
if e != nil {
|
||||
log.Printf("new method request err:%s", e)
|
||||
utils.CloseConn(inConn)
|
||||
err = e.(error)
|
||||
return
|
||||
}
|
||||
if !methodReq.Select(socks.Method_NO_AUTH) {
|
||||
methodReq.Reply(socks.Method_NONE_ACCEPTABLE)
|
||||
utils.CloseConn(inConn)
|
||||
log.Printf("none method found : Method_NO_AUTH")
|
||||
return
|
||||
}
|
||||
//method select reply
|
||||
err = methodReq.Reply(socks.Method_NO_AUTH)
|
||||
if err != nil {
|
||||
log.Printf("reply answer data fail,ERR: %s", err)
|
||||
utils.CloseConn(inConn)
|
||||
return
|
||||
}
|
||||
//request detail
|
||||
request, e := socks.NewRequest(*inConn)
|
||||
if e != nil {
|
||||
log.Printf("read request data fail,ERR: %s", e)
|
||||
utils.CloseConn(inConn)
|
||||
err = e.(error)
|
||||
return
|
||||
}
|
||||
if request.CMD() != socks.CMD_CONNECT {
|
||||
//only tcp is supported
|
||||
request.TCPReply(socks.REP_UNKNOWN)
|
||||
utils.CloseConn(inConn)
|
||||
err = errors.New("cmd not supported")
|
||||
return
|
||||
}
|
||||
address = request.Addr()
|
||||
request.TCPReply(socks.REP_SUCCESS)
|
||||
} else if bytes.IndexByte(header, '\n') != -1 {
|
||||
//http
|
||||
var request utils.HTTPRequest
|
||||
request, err = utils.NewHTTPRequest(inConn, 1024, false, nil, header)
|
||||
if err != nil {
|
||||
log.Printf("new http request fail,ERR: %s", err)
|
||||
utils.CloseConn(inConn)
|
||||
return
|
||||
}
|
||||
if len(header) >= 7 && strings.ToLower(string(header[:7])) == "connect" {
|
||||
//https
|
||||
request.HTTPSReply()
|
||||
//log.Printf("https reply: %s", request.Host)
|
||||
} else {
|
||||
forwardBytes = request.HeadBuf
|
||||
}
|
||||
address = request.Host
|
||||
} else {
|
||||
log.Printf("unknown request from: %s,%s", (*inConn).RemoteAddr(), string(header))
|
||||
utils.CloseConn(inConn)
|
||||
err = errors.New("unknown request")
|
||||
return
|
||||
}
|
||||
//connect to parent
|
||||
var outConn net.Conn
|
||||
var _outConn interface{}
|
||||
_outConn, err = s.outPool.Pool.Get()
|
||||
if err == nil {
|
||||
outConn = _outConn.(net.Conn)
|
||||
}
|
||||
if err != nil {
|
||||
log.Printf("connect to %s , err:%s", *s.cfg.Parent, err)
|
||||
utils.CloseConn(inConn)
|
||||
return
|
||||
}
|
||||
//ask parent for connect to target address
|
||||
if *s.cfg.ParentServiceType == "http" {
|
||||
//http parent
|
||||
fmt.Fprintf(outConn, "CONNECT %s HTTP/1.1\r\n", address)
|
||||
reply := make([]byte, 100)
|
||||
n, err = outConn.Read(reply)
|
||||
if err != nil {
|
||||
log.Printf("read reply from %s , err:%s", *s.cfg.Parent, err)
|
||||
utils.CloseConn(inConn)
|
||||
utils.CloseConn(&outConn)
|
||||
return
|
||||
}
|
||||
//log.Printf("reply: %s", string(reply[:n]))
|
||||
} else {
|
||||
log.Printf("connect %s", address)
|
||||
//socks parent
|
||||
//send auth type
|
||||
_, err = outConn.Write([]byte{0x05, 0x01, 0x00})
|
||||
if err != nil {
|
||||
log.Printf("write method to %s fail, err:%s", *s.cfg.Parent, err)
|
||||
utils.CloseConn(inConn)
|
||||
utils.CloseConn(&outConn)
|
||||
return
|
||||
}
|
||||
//read reply
|
||||
reply := make([]byte, 512)
|
||||
n, err = outConn.Read(reply)
|
||||
if err != nil {
|
||||
log.Printf("read reply from %s , err:%s", *s.cfg.Parent, err)
|
||||
utils.CloseConn(inConn)
|
||||
utils.CloseConn(&outConn)
|
||||
return
|
||||
}
|
||||
//log.Printf("method reply %v", reply[:n])
|
||||
|
||||
//build request
|
||||
buf, err = s.buildRequest(address)
|
||||
if err != nil {
|
||||
log.Printf("build request to %s fail , err:%s", *s.cfg.Parent, err)
|
||||
utils.CloseConn(inConn)
|
||||
utils.CloseConn(&outConn)
|
||||
return
|
||||
}
|
||||
//send address request
|
||||
_, err = outConn.Write(buf)
|
||||
if err != nil {
|
||||
log.Printf("write request to %s fail, err:%s", *s.cfg.Parent, err)
|
||||
utils.CloseConn(inConn)
|
||||
utils.CloseConn(&outConn)
|
||||
return
|
||||
}
|
||||
//read reply
|
||||
reply = make([]byte, 512)
|
||||
n, err = outConn.Read(reply)
|
||||
if err != nil {
|
||||
log.Printf("read reply from %s , err:%s", *s.cfg.Parent, err)
|
||||
utils.CloseConn(inConn)
|
||||
utils.CloseConn(&outConn)
|
||||
return
|
||||
}
|
||||
|
||||
//log.Printf("request reply %v", reply[:n])
|
||||
}
|
||||
//forward client data to target,if necessary.
|
||||
if len(forwardBytes) > 0 {
|
||||
outConn.Write(forwardBytes)
|
||||
}
|
||||
//bind
|
||||
inAddr := (*inConn).RemoteAddr().String()
|
||||
outAddr := outConn.RemoteAddr().String()
|
||||
utils.IoBind((*inConn), outConn, func(err interface{}) {
|
||||
log.Printf("conn %s - %s released", inAddr, outAddr)
|
||||
})
|
||||
log.Printf("conn %s - %s connected", inAddr, outAddr)
|
||||
return
|
||||
}
|
||||
func (s *SPS) buildRequest(address string) (buf []byte, err error) {
|
||||
host, portStr, err := net.SplitHostPort(address)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
port, err := strconv.Atoi(portStr)
|
||||
if err != nil {
|
||||
err = errors.New("proxy: failed to parse port number: " + portStr)
|
||||
return
|
||||
}
|
||||
if port < 1 || port > 0xffff {
|
||||
err = errors.New("proxy: port number out of range: " + portStr)
|
||||
return
|
||||
}
|
||||
buf = buf[:0]
|
||||
buf = append(buf, 0x05, 0x01, 0 /* reserved */)
|
||||
|
||||
if ip := net.ParseIP(host); ip != nil {
|
||||
if ip4 := ip.To4(); ip4 != nil {
|
||||
buf = append(buf, 0x01)
|
||||
ip = ip4
|
||||
} else {
|
||||
buf = append(buf, 0x04)
|
||||
}
|
||||
buf = append(buf, ip...)
|
||||
} else {
|
||||
if len(host) > 255 {
|
||||
err = errors.New("proxy: destination host name too long: " + host)
|
||||
return
|
||||
}
|
||||
buf = append(buf, 0x03)
|
||||
buf = append(buf, byte(len(host)))
|
||||
buf = append(buf, host...)
|
||||
}
|
||||
buf = append(buf, byte(port>>8), byte(port))
|
||||
return
|
||||
}
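// Illustrative only (not part of this file): for the address "example.com:443" the
// request built above is VER CMD RSV ATYP LEN host PORT, i.e.
//
//  0x05 0x01 0x00 0x03 0x0b 'e' 'x' 'a' 'm' 'p' 'l' 'e' '.' 'c' 'o' 'm' 0x01 0xbb
//
// where 0x03 marks a domain-name target, 0x0b is len("example.com"), and 0x01 0xbb
// is port 443 in network byte order; an IPv4 target would instead use ATYP 0x01
// followed by the 4 raw address bytes.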
|
||||
func (s *SPS) Resolve(address string) string {
|
||||
if *s.cfg.DNSAddress == "" {
|
||||
return address
|
||||
}
|
||||
ip, err := s.domainResolver.Resolve(address)
|
||||
if err != nil {
|
||||
log.Printf("dns error %s , ERR:%s", address, err)
|
||||
}
|
||||
return ip
|
||||
}
|
||||
@ -6,8 +6,8 @@ import (
|
||||
"io"
|
||||
"log"
|
||||
"net"
|
||||
"proxy/utils"
|
||||
"runtime/debug"
|
||||
"snail007/proxy/utils"
|
||||
"time"
|
||||
|
||||
"strconv"
|
||||
@ -56,9 +56,9 @@ func (s *TCP) Start(args interface{}) (err error) {
|
||||
if *s.cfg.LocalType == TYPE_TCP {
|
||||
err = sc.ListenTCP(s.callback)
|
||||
} else if *s.cfg.LocalType == TYPE_TLS {
|
||||
err = sc.ListenTls(s.cfg.CertBytes, s.cfg.KeyBytes, s.callback)
|
||||
err = sc.ListenTls(s.cfg.CertBytes, s.cfg.KeyBytes, nil, s.callback)
|
||||
} else if *s.cfg.LocalType == TYPE_KCP {
|
||||
err = sc.ListenKCP(*s.cfg.KCPMethod, *s.cfg.KCPKey, s.callback)
|
||||
err = sc.ListenKCP(s.cfg.KCP, s.callback)
|
||||
}
|
||||
if err != nil {
|
||||
return
|
||||
@ -171,9 +171,8 @@ func (s *TCP) InitOutConnPool() {
|
||||
s.outPool = utils.NewOutPool(
|
||||
*s.cfg.CheckParentInterval,
|
||||
*s.cfg.ParentType,
|
||||
*s.cfg.KCPMethod,
|
||||
*s.cfg.KCPKey,
|
||||
s.cfg.CertBytes, s.cfg.KeyBytes,
|
||||
s.cfg.KCP,
|
||||
s.cfg.CertBytes, s.cfg.KeyBytes, nil,
|
||||
*s.cfg.Parent,
|
||||
*s.cfg.Timeout,
|
||||
*s.cfg.PoolSize,
|
||||
|
||||
@ -4,7 +4,7 @@ import (
|
||||
"bufio"
|
||||
"log"
|
||||
"net"
|
||||
"proxy/utils"
|
||||
"snail007/proxy/utils"
|
||||
"strconv"
|
||||
"time"
|
||||
)
|
||||
@ -51,7 +51,7 @@ func (s *TunnelBridge) Start(args interface{}) (err error) {
|
||||
p, _ := strconv.Atoi(port)
|
||||
sc := utils.NewServerChannel(host, p)
|
||||
|
||||
err = sc.ListenTls(s.cfg.CertBytes, s.cfg.KeyBytes, func(inConn net.Conn) {
|
||||
err = sc.ListenTls(s.cfg.CertBytes, s.cfg.KeyBytes, nil, func(inConn net.Conn) {
|
||||
//log.Printf("connection from %s ", inConn.RemoteAddr())
|
||||
|
||||
reader := bufio.NewReader(inConn)
|
||||
|
||||
@ -6,7 +6,7 @@ import (
|
||||
"io"
|
||||
"log"
|
||||
"net"
|
||||
"proxy/utils"
|
||||
"snail007/proxy/utils"
|
||||
"time"
|
||||
)
|
||||
|
||||
@ -161,7 +161,7 @@ func (s *TunnelClient) GetInConn(typ uint8, data ...string) (outConn net.Conn, e
|
||||
}
|
||||
func (s *TunnelClient) GetConn() (conn net.Conn, err error) {
|
||||
var _conn tls.Conn
|
||||
_conn, err = utils.TlsConnectHost(*s.cfg.Parent, *s.cfg.Timeout, s.cfg.CertBytes, s.cfg.KeyBytes)
|
||||
_conn, err = utils.TlsConnectHost(*s.cfg.Parent, *s.cfg.Timeout, s.cfg.CertBytes, s.cfg.KeyBytes, nil)
|
||||
if err == nil {
|
||||
conn = net.Conn(&_conn)
|
||||
}
|
||||
|
||||
@ -6,8 +6,8 @@ import (
|
||||
"io"
|
||||
"log"
|
||||
"net"
|
||||
"proxy/utils"
|
||||
"runtime/debug"
|
||||
"snail007/proxy/utils"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
@ -174,7 +174,7 @@ func (s *TunnelServerManager) GetOutConn(typ uint8) (outConn net.Conn, ID string
|
||||
}
|
||||
func (s *TunnelServerManager) GetConn() (conn net.Conn, err error) {
|
||||
var _conn tls.Conn
|
||||
_conn, err = utils.TlsConnectHost(*s.cfg.Parent, *s.cfg.Timeout, s.cfg.CertBytes, s.cfg.KeyBytes)
|
||||
_conn, err = utils.TlsConnectHost(*s.cfg.Parent, *s.cfg.Timeout, s.cfg.CertBytes, s.cfg.KeyBytes, nil)
|
||||
if err == nil {
|
||||
conn = net.Conn(&_conn)
|
||||
}
|
||||
@ -280,7 +280,7 @@ func (s *TunnelServer) GetOutConn(typ uint8) (outConn net.Conn, ID string, err e
|
||||
}
|
||||
func (s *TunnelServer) GetConn() (conn net.Conn, err error) {
|
||||
var _conn tls.Conn
|
||||
_conn, err = utils.TlsConnectHost(*s.cfg.Parent, *s.cfg.Timeout, s.cfg.CertBytes, s.cfg.KeyBytes)
|
||||
_conn, err = utils.TlsConnectHost(*s.cfg.Parent, *s.cfg.Timeout, s.cfg.CertBytes, s.cfg.KeyBytes, nil)
|
||||
if err == nil {
|
||||
conn = net.Conn(&_conn)
|
||||
}
|
||||
|
||||
@ -7,8 +7,9 @@ import (
|
||||
"io"
|
||||
"log"
|
||||
"net"
|
||||
"proxy/utils"
|
||||
"runtime/debug"
|
||||
"snail007/proxy/services/kcpcfg"
|
||||
"snail007/proxy/utils"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
@ -208,8 +209,8 @@ func (s *UDP) InitOutConnPool() {
|
||||
s.outPool = utils.NewOutPool(
|
||||
*s.cfg.CheckParentInterval,
|
||||
*s.cfg.ParentType,
|
||||
"", "",
|
||||
s.cfg.CertBytes, s.cfg.KeyBytes,
|
||||
kcpcfg.KCPConfigArgs{},
|
||||
s.cfg.CertBytes, s.cfg.KeyBytes, nil,
|
||||
*s.cfg.Parent,
|
||||
*s.cfg.Timeout,
|
||||
*s.cfg.PoolSize,
|
||||
|
||||
@ -1,4 +1,3 @@
|
||||
#!/bin/bash
|
||||
rm -rf /usr/bin/proxy
|
||||
rm -rf /usr/bin/proxyd
|
||||
echo "uninstall done"
|
||||
|
||||
@ -7,6 +7,7 @@ import (
|
||||
"crypto/tls"
|
||||
"crypto/x509"
|
||||
"encoding/binary"
|
||||
"encoding/pem"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
@ -17,9 +18,11 @@ import (
|
||||
"net/http"
|
||||
"os"
|
||||
"os/exec"
|
||||
"snail007/proxy/services/kcpcfg"
|
||||
|
||||
"golang.org/x/crypto/pbkdf2"
|
||||
|
||||
"snail007/proxy/utils/id"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
@ -83,14 +86,14 @@ func ioCopy(dst io.ReadWriter, src io.ReadWriter) (err error) {
|
||||
}
|
||||
}
|
||||
}
|
||||
func TlsConnectHost(host string, timeout int, certBytes, keyBytes []byte) (conn tls.Conn, err error) {
|
||||
func TlsConnectHost(host string, timeout int, certBytes, keyBytes, caCertBytes []byte) (conn tls.Conn, err error) {
|
||||
h := strings.Split(host, ":")
|
||||
port, _ := strconv.Atoi(h[1])
|
||||
return TlsConnect(h[0], port, timeout, certBytes, keyBytes)
|
||||
return TlsConnect(h[0], port, timeout, certBytes, keyBytes, caCertBytes)
|
||||
}
|
||||
|
||||
func TlsConnect(host string, port, timeout int, certBytes, keyBytes []byte) (conn tls.Conn, err error) {
|
||||
conf, err := getRequestTlsConfig(certBytes, keyBytes)
|
||||
func TlsConnect(host string, port, timeout int, certBytes, keyBytes, caCertBytes []byte) (conn tls.Conn, err error) {
|
||||
conf, err := getRequestTlsConfig(certBytes, keyBytes, caCertBytes)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
@ -100,22 +103,49 @@ func TlsConnect(host string, port, timeout int, certBytes, keyBytes []byte) (con
|
||||
}
|
||||
return *tls.Client(_conn, conf), err
|
||||
}
|
||||
func getRequestTlsConfig(certBytes, keyBytes []byte) (conf *tls.Config, err error) {
|
||||
func getRequestTlsConfig(certBytes, keyBytes, caCertBytes []byte) (conf *tls.Config, err error) {
|
||||
|
||||
var cert tls.Certificate
|
||||
cert, err = tls.X509KeyPair(certBytes, keyBytes)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
serverCertPool := x509.NewCertPool()
|
||||
ok := serverCertPool.AppendCertsFromPEM(certBytes)
|
||||
caBytes := certBytes
|
||||
if caCertBytes != nil {
|
||||
caBytes = caCertBytes
|
||||
|
||||
}
|
||||
ok := serverCertPool.AppendCertsFromPEM(caBytes)
|
||||
if !ok {
|
||||
err = errors.New("failed to parse root certificate")
|
||||
}
|
||||
block, _ := pem.Decode(caBytes)
|
||||
if block == nil {
|
||||
panic("failed to parse certificate PEM")
|
||||
}
|
||||
x509Cert, _ := x509.ParseCertificate(block.Bytes)
|
||||
if x509Cert == nil {
|
||||
panic("failed to parse block")
|
||||
}
|
||||
conf = &tls.Config{
|
||||
RootCAs: serverCertPool,
|
||||
Certificates: []tls.Certificate{cert},
|
||||
ServerName: "proxy",
|
||||
InsecureSkipVerify: false,
|
||||
InsecureSkipVerify: true,
|
||||
ServerName: x509Cert.Subject.CommonName,
|
||||
VerifyPeerCertificate: func(rawCerts [][]byte, verifiedChains [][]*x509.Certificate) error {
|
||||
opts := x509.VerifyOptions{
|
||||
Roots: serverCertPool,
|
||||
}
|
||||
for _, rawCert := range rawCerts {
|
||||
cert, _ := x509.ParseCertificate(rawCert)
|
||||
_, err := cert.Verify(opts)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
},
|
||||
}
|
||||
return
|
||||
}
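// Hedged usage sketch (not in the original source): dialing a TLS parent with the
// helpers above. The file names and the address are placeholders; TlsBytes,
// TlsConnectHost and the millisecond timeout follow the calls made elsewhere in
// this package.
//
//  certBytes, keyBytes := TlsBytes("proxy.crt", "proxy.key")
//  caBytes, _ := ioutil.ReadFile("ca.crt") // optional; verification falls back to certBytes when nil
//  c, err := TlsConnectHost("parent.example.com:33080", 3000, certBytes, keyBytes, caBytes)
//  if err == nil {
//      conn := net.Conn(&c)
//      _ = conn // use the connection
//  }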
|
||||
@ -124,31 +154,41 @@ func ConnectHost(hostAndPort string, timeout int) (conn net.Conn, err error) {
|
||||
conn, err = net.DialTimeout("tcp", hostAndPort, time.Duration(timeout)*time.Millisecond)
|
||||
return
|
||||
}
|
||||
func ConnectKCPHost(hostAndPort, method, key string) (conn net.Conn, err error) {
|
||||
kcpconn, err := kcp.DialWithOptions(hostAndPort, GetKCPBlock(method, key), 10, 3)
|
||||
func ConnectKCPHost(hostAndPort string, config kcpcfg.KCPConfigArgs) (conn net.Conn, err error) {
|
||||
kcpconn, err := kcp.DialWithOptions(hostAndPort, config.Block, *config.DataShard, *config.ParityShard)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
kcpconn.SetNoDelay(1, 10, 2, 1)
|
||||
kcpconn.SetWindowSize(1024, 1024)
|
||||
kcpconn.SetMtu(1400)
|
||||
kcpconn.SetACKNoDelay(false)
|
||||
return kcpconn, err
|
||||
kcpconn.SetStreamMode(true)
|
||||
kcpconn.SetWriteDelay(true)
|
||||
kcpconn.SetNoDelay(*config.NoDelay, *config.Interval, *config.Resend, *config.NoCongestion)
|
||||
kcpconn.SetMtu(*config.MTU)
|
||||
kcpconn.SetWindowSize(*config.SndWnd, *config.RcvWnd)
|
||||
kcpconn.SetACKNoDelay(*config.AckNodelay)
|
||||
if *config.NoComp {
|
||||
return kcpconn, err
|
||||
}
|
||||
return NewCompStream(kcpconn), err
|
||||
}
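// Hedged sketch (not part of the original file): filling a kcpcfg.KCPConfigArgs by
// hand before calling ConnectKCPHost. The field names mirror the dereferences above;
// the numeric values and the "aes"/"mykey" pair are illustrative, not project defaults.
//
//  ds, ps := 10, 3
//  nd, iv, rs, nc := 1, 10, 2, 1
//  mtu, snd, rcv := 1400, 1024, 1024
//  ack, nocomp := false, true
//  cfg := kcpcfg.KCPConfigArgs{
//      Block:     GetKCPBlock("aes", "mykey"),
//      DataShard: &ds, ParityShard: &ps,
//      NoDelay: &nd, Interval: &iv, Resend: &rs, NoCongestion: &nc,
//      MTU: &mtu, SndWnd: &snd, RcvWnd: &rcv,
//      AckNodelay: &ack, NoComp: &nocomp,
//  }
//  conn, err := ConnectKCPHost("parent.example.com:38080", cfg)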
|
||||
func ListenTls(ip string, port int, certBytes, keyBytes []byte) (ln *net.Listener, err error) {
|
||||
|
||||
func ListenTls(ip string, port int, certBytes, keyBytes, caCertBytes []byte) (ln *net.Listener, err error) {
|
||||
|
||||
var cert tls.Certificate
|
||||
cert, err = tls.X509KeyPair(certBytes, keyBytes)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
clientCertPool := x509.NewCertPool()
|
||||
ok := clientCertPool.AppendCertsFromPEM(certBytes)
|
||||
caBytes := certBytes
|
||||
if caCertBytes != nil {
|
||||
caBytes = caCertBytes
|
||||
}
|
||||
ok := clientCertPool.AppendCertsFromPEM(caBytes)
|
||||
if !ok {
|
||||
err = errors.New("failed to parse root certificate")
|
||||
}
|
||||
config := &tls.Config{
|
||||
ClientCAs: clientCertPool,
|
||||
ServerName: "proxy",
|
||||
Certificates: []tls.Certificate{cert},
|
||||
ClientAuth: tls.RequireAndVerifyClientCert,
|
||||
}
|
||||
@ -193,20 +233,95 @@ func CloseConn(conn *net.Conn) {
|
||||
}
|
||||
}
|
||||
func Keygen() (err error) {
|
||||
cmd := exec.Command("sh", "-c", "openssl genrsa -out proxy.key 2048")
|
||||
out, err := cmd.CombinedOutput()
|
||||
if err != nil {
|
||||
log.Printf("err:%s", err)
|
||||
return
|
||||
CList := []string{"AD", "AE", "AF", "AG", "AI", "AL", "AM", "AO", "AR", "AT", "AU", "AZ", "BB", "BD", "BE", "BF", "BG", "BH", "BI", "BJ", "BL", "BM", "BN", "BO", "BR", "BS", "BW", "BY", "BZ", "CA", "CF", "CG", "CH", "CK", "CL", "CM", "CN", "CO", "CR", "CS", "CU", "CY", "CZ", "DE", "DJ", "DK", "DO", "DZ", "EC", "EE", "EG", "ES", "ET", "FI", "FJ", "FR", "GA", "GB", "GD", "GE", "GF", "GH", "GI", "GM", "GN", "GR", "GT", "GU", "GY", "HK", "HN", "HT", "HU", "ID", "IE", "IL", "IN", "IQ", "IR", "IS", "IT", "JM", "JO", "JP", "KE", "KG", "KH", "KP", "KR", "KT", "KW", "KZ", "LA", "LB", "LC", "LI", "LK", "LR", "LS", "LT", "LU", "LV", "LY", "MA", "MC", "MD", "MG", "ML", "MM", "MN", "MO", "MS", "MT", "MU", "MV", "MW", "MX", "MY", "MZ", "NA", "NE", "NG", "NI", "NL", "NO", "NP", "NR", "NZ", "OM", "PA", "PE", "PF", "PG", "PH", "PK", "PL", "PR", "PT", "PY", "QA", "RO", "RU", "SA", "SB", "SC", "SD", "SE", "SG", "SI", "SK", "SL", "SM", "SN", "SO", "SR", "ST", "SV", "SY", "SZ", "TD", "TG", "TH", "TJ", "TM", "TN", "TO", "TR", "TT", "TW", "TZ", "UA", "UG", "US", "UY", "UZ", "VC", "VE", "VN", "YE", "YU", "ZA", "ZM", "ZR", "ZW"}
|
||||
domainSubfixList := []string{".com", ".edu", ".gov", ".int", ".mil", ".net", ".org", ".biz", ".info", ".pro", ".name", ".museum", ".coop", ".aero", ".xxx", ".idv", ".ac", ".ad", ".ae", ".af", ".ag", ".ai", ".al", ".am", ".an", ".ao", ".aq", ".ar", ".as", ".at", ".au", ".aw", ".az", ".ba", ".bb", ".bd", ".be", ".bf", ".bg", ".bh", ".bi", ".bj", ".bm", ".bn", ".bo", ".br", ".bs", ".bt", ".bv", ".bw", ".by", ".bz", ".ca", ".cc", ".cd", ".cf", ".cg", ".ch", ".ci", ".ck", ".cl", ".cm", ".cn", ".co", ".cr", ".cu", ".cv", ".cx", ".cy", ".cz", ".de", ".dj", ".dk", ".dm", ".do", ".dz", ".ec", ".ee", ".eg", ".eh", ".er", ".es", ".et", ".eu", ".fi", ".fj", ".fk", ".fm", ".fo", ".fr", ".ga", ".gd", ".ge", ".gf", ".gg", ".gh", ".gi", ".gl", ".gm", ".gn", ".gp", ".gq", ".gr", ".gs", ".gt", ".gu", ".gw", ".gy", ".hk", ".hm", ".hn", ".hr", ".ht", ".hu", ".id", ".ie", ".il", ".im", ".in", ".io", ".iq", ".ir", ".is", ".it", ".je", ".jm", ".jo", ".jp", ".ke", ".kg", ".kh", ".ki", ".km", ".kn", ".kp", ".kr", ".kw", ".ky", ".kz", ".la", ".lb", ".lc", ".li", ".lk", ".lr", ".ls", ".lt", ".lu", ".lv", ".ly", ".ma", ".mc", ".md", ".mg", ".mh", ".mk", ".ml", ".mm", ".mn", ".mo", ".mp", ".mq", ".mr", ".ms", ".mt", ".mu", ".mv", ".mw", ".mx", ".my", ".mz", ".na", ".nc", ".ne", ".nf", ".ng", ".ni", ".nl", ".no", ".np", ".nr", ".nu", ".nz", ".om", ".pa", ".pe", ".pf", ".pg", ".ph", ".pk", ".pl", ".pm", ".pn", ".pr", ".ps", ".pt", ".pw", ".py", ".qa", ".re", ".ro", ".ru", ".rw", ".sa", ".sb", ".sc", ".sd", ".se", ".sg", ".sh", ".si", ".sj", ".sk", ".sl", ".sm", ".sn", ".so", ".sr", ".st", ".sv", ".sy", ".sz", ".tc", ".td", ".tf", ".tg", ".th", ".tj", ".tk", ".tl", ".tm", ".tn", ".to", ".tp", ".tr", ".tt", ".tv", ".tw", ".tz", ".ua", ".ug", ".uk", ".um", ".us", ".uy", ".uz", ".va", ".vc", ".ve", ".vg", ".vi", ".vn", ".vu", ".wf", ".ws", ".ye", ".yt", ".yu", ".yr", ".za", ".zm", ".zw"}
|
||||
C := CList[int(RandInt(4))%len(CList)]
|
||||
ST := RandString(int(RandInt(4) % 10))
|
||||
O := RandString(int(RandInt(4) % 10))
|
||||
CN := strings.ToLower(RandString(int(RandInt(4)%10)) + domainSubfixList[int(RandInt(4))%len(domainSubfixList)])
|
||||
//log.Printf("C: %s, ST: %s, O: %s, CN: %s", C, ST, O, CN)
|
||||
var out []byte
|
||||
if len(os.Args) == 3 && os.Args[2] == "ca" {
|
||||
cmd := exec.Command("sh", "-c", "openssl genrsa -out ca.key 2048")
|
||||
out, err = cmd.CombinedOutput()
|
||||
if err != nil {
|
||||
log.Printf("err:%s", err)
|
||||
return
|
||||
}
|
||||
fmt.Println(string(out))
|
||||
|
||||
cmdStr := fmt.Sprintf("openssl req -new -key ca.key -x509 -days 36500 -out ca.crt -subj /C=%s/ST=%s/O=%s/CN=%s", C, ST, O, "*."+CN)
|
||||
cmd = exec.Command("sh", "-c", cmdStr)
|
||||
out, err = cmd.CombinedOutput()
|
||||
if err != nil {
|
||||
log.Printf("err:%s", err)
|
||||
return
|
||||
}
|
||||
fmt.Println(string(out))
|
||||
} else if len(os.Args) == 5 && os.Args[2] == "ca" && os.Args[3] != "" && os.Args[4] != "" {
|
||||
certBytes, _ := ioutil.ReadFile("ca.crt")
|
||||
block, _ := pem.Decode(certBytes)
|
||||
if block == nil || certBytes == nil {
|
||||
panic("failed to parse ca certificate PEM")
|
||||
}
|
||||
x509Cert, _ := x509.ParseCertificate(block.Bytes)
|
||||
if x509Cert == nil {
|
||||
panic("failed to parse block")
|
||||
}
|
||||
name := os.Args[3]
|
||||
days := os.Args[4]
|
||||
cmd := exec.Command("sh", "-c", "openssl genrsa -out "+name+".key 2048")
|
||||
out, err = cmd.CombinedOutput()
|
||||
if err != nil {
|
||||
log.Printf("err:%s", err)
|
||||
return
|
||||
}
|
||||
fmt.Println(string(out))
|
||||
|
||||
cmdStr := fmt.Sprintf("openssl req -new -key %s.key -out %s.csr -subj /C=%s/ST=%s/O=%s/CN=%s", name, name, C, ST, O, CN)
|
||||
fmt.Printf("%s", cmdStr)
|
||||
cmd = exec.Command("sh", "-c", cmdStr)
|
||||
out, err = cmd.CombinedOutput()
|
||||
if err != nil {
|
||||
log.Printf("err:%s", err)
|
||||
return
|
||||
}
|
||||
fmt.Println(string(out))
|
||||
|
||||
cmdStr = fmt.Sprintf("openssl x509 -req -days %s -in %s.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out %s.crt", days, name, name)
|
||||
fmt.Printf("%s", cmdStr)
|
||||
cmd = exec.Command("sh", "-c", cmdStr)
|
||||
out, err = cmd.CombinedOutput()
|
||||
if err != nil {
|
||||
log.Printf("err:%s", err)
|
||||
return
|
||||
}
|
||||
|
||||
fmt.Println(string(out))
|
||||
} else if len(os.Args) == 3 && os.Args[2] == "usage" {
|
||||
fmt.Println(`proxy keygen //generate proxy.crt and proxy.key
|
||||
proxy keygen ca //generate ca.crt and ca.key
|
||||
proxy keygen ca client0 30 //generate client0.crt client0.key and use ca.crt sign it with 30 days
|
||||
`)
|
||||
} else if len(os.Args) == 2 {
|
||||
cmd := exec.Command("sh", "-c", "openssl genrsa -out proxy.key 2048")
|
||||
out, err = cmd.CombinedOutput()
|
||||
if err != nil {
|
||||
log.Printf("err:%s", err)
|
||||
return
|
||||
}
|
||||
fmt.Println(string(out))
|
||||
|
||||
cmdStr := fmt.Sprintf("openssl req -new -key proxy.key -x509 -days 36500 -out proxy.crt -subj /C=%s/ST=%s/O=%s/CN=%s", C, ST, O, CN)
|
||||
cmd = exec.Command("sh", "-c", cmdStr)
|
||||
out, err = cmd.CombinedOutput()
|
||||
if err != nil {
|
||||
log.Printf("err:%s", err)
|
||||
return
|
||||
}
|
||||
fmt.Println(string(out))
|
||||
}
|
||||
fmt.Println(string(out))
|
||||
cmd = exec.Command("sh", "-c", `openssl req -new -key proxy.key -x509 -days 3650 -out proxy.crt -subj /C=CN/ST=BJ/O="Localhost Ltd"/CN=proxy`)
|
||||
out, err = cmd.CombinedOutput()
|
||||
if err != nil {
|
||||
log.Printf("err:%s", err)
|
||||
return
|
||||
}
|
||||
fmt.Println(string(out))
|
||||
|
||||
return
|
||||
}
|
||||
func GetAllInterfaceAddr() ([]net.IP, error) {
|
||||
@ -301,9 +416,33 @@ func ReadUDPPacket(_reader io.Reader) (srcAddr string, packet []byte, err error)
|
||||
return
|
||||
}
|
||||
func Uniqueid() string {
|
||||
var src = rand.NewSource(time.Now().UnixNano())
|
||||
s := fmt.Sprintf("%d", src.Int63())
|
||||
return s[len(s)-5:len(s)-1] + fmt.Sprintf("%d", uint64(time.Now().UnixNano()))[8:]
|
||||
return xid.New().String()
|
||||
// var src = rand.NewSource(time.Now().UnixNano())
|
||||
// s := fmt.Sprintf("%d", src.Int63())
|
||||
// return s[len(s)-5:len(s)-1] + fmt.Sprintf("%d", uint64(time.Now().UnixNano()))[8:]
|
||||
}
|
||||
func RandString(strlen int) string {
|
||||
codes := "QWERTYUIOPLKJHGFDSAZXCVBNMabcdefghijklmnopqrstuvwxyz0123456789"
|
||||
codeLen := len(codes)
|
||||
data := make([]byte, strlen)
|
||||
rand.Seed(time.Now().UnixNano() + rand.Int63() + rand.Int63() + rand.Int63() + rand.Int63())
|
||||
for i := 0; i < strlen; i++ {
|
||||
idx := rand.Intn(codeLen)
|
||||
data[i] = byte(codes[idx])
|
||||
}
|
||||
return string(data)
|
||||
}
|
||||
func RandInt(strLen int) int64 {
|
||||
codes := "123456789"
|
||||
codeLen := len(codes)
|
||||
data := make([]byte, strLen)
|
||||
rand.Seed(time.Now().UnixNano() + rand.Int63() + rand.Int63() + rand.Int63() + rand.Int63())
|
||||
for i := 0; i < strLen; i++ {
|
||||
idx := rand.Intn(codeLen)
|
||||
data[i] = byte(codes[idx])
|
||||
}
|
||||
i, _ := strconv.ParseInt(string(data), 10, 64)
|
||||
return i
|
||||
}
|
||||
func ReadData(r io.Reader) (data string, err error) {
|
||||
var len uint16
|
||||
@ -430,7 +569,7 @@ func GetKCPBlock(method, key string) (block kcp.BlockCrypt) {
|
||||
}
|
||||
return
|
||||
}
|
||||
func HttpGet(URL string, timeout int) (body []byte, code int, err error) {
|
||||
func HttpGet(URL string, timeout int, host ...string) (body []byte, code int, err error) {
|
||||
var tr *http.Transport
|
||||
var client *http.Client
|
||||
conf := &tls.Config{
|
||||
@ -444,7 +583,16 @@ func HttpGet(URL string, timeout int) (body []byte, code int, err error) {
|
||||
client = &http.Client{Timeout: time.Millisecond * time.Duration(timeout), Transport: tr}
|
||||
}
|
||||
defer tr.CloseIdleConnections()
|
||||
resp, err := client.Get(URL)
|
||||
|
||||
//resp, err := client.Get(URL)
|
||||
req, err := http.NewRequest("GET", URL, nil)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
if len(host) == 1 && host[0] != "" {
|
||||
req.Host = host[0]
|
||||
}
|
||||
resp, err := client.Do(req)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
@ -453,6 +601,29 @@ func HttpGet(URL string, timeout int) (body []byte, code int, err error) {
|
||||
body, err = ioutil.ReadAll(resp.Body)
|
||||
return
|
||||
}
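// Hedged usage sketch (not in the original source): the variadic host parameter added
// above overrides the HTTP Host header, which helps when an auth or check URL is
// reached through a front IP. The URL, host name and status code are placeholders.
//
//  body, code, err := HttpGet("http://10.0.0.5/auth?user=u", 3000, "auth.example.com")
//  if err == nil && code == 204 {
//      // authorised
//  }
//  _ = body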
func IsIternalIP(domainOrIP string) bool {
    var outIPs []net.IP
    outIPs, err := net.LookupIP(domainOrIP)
    if err != nil {
        return false
    }
    for _, ip := range outIPs {
        if ip.IsLoopback() {
            return true
        }
        ip4 := ip.To4()
        if ip4 == nil {
            //skip non-IPv4 addresses
            continue
        }
        //10.0.0.0/8
        if ip4.Mask(net.IPv4Mask(255, 0, 0, 0)).String() == "10.0.0.0" {
            return true
        }
        //192.168.0.0/16 (a /16 mask is required here; masking with /8 can never yield 192.168.0.0)
        if ip4.Mask(net.IPv4Mask(255, 255, 0, 0)).String() == "192.168.0.0" {
            return true
        }
        //172.16.0.0/12
        if ip4.Mask(net.IPv4Mask(255, 0, 0, 0)).String() == "172.0.0.0" {
            i, _ := strconv.Atoi(strings.Split(ip4.String(), ".")[1])
            return i >= 16 && i <= 31
        }
    }
    return false
}
|
||||
// type sockaddr struct {
|
||||
// family uint16
|
||||
|
||||
utils/id/xid.go (new file, 264 lines)
@@ -0,0 +1,264 @@
|
||||
// Package xid is a globally unique id generator suited for web scale
|
||||
//
|
||||
// Xid is using Mongo Object ID algorithm to generate globally unique ids:
|
||||
// https://docs.mongodb.org/manual/reference/object-id/
|
||||
//
|
||||
// - 4-byte value representing the seconds since the Unix epoch,
|
||||
// - 3-byte machine identifier,
|
||||
// - 2-byte process id, and
|
||||
// - 3-byte counter, starting with a random value.
|
||||
//
|
||||
// The binary representation of the id is compatible with Mongo 12 bytes Object IDs.
|
||||
// The string representation is using base32 hex (w/o padding) for better space efficiency
|
||||
// when stored in that form (20 bytes). The hex variant of base32 is used to retain the
|
||||
// sortable property of the id.
|
||||
//
|
||||
// Xid doesn't use base64 because case sensitivity and the 2 non alphanum chars may be an
|
||||
// issue when transported as a string between various systems. Base36 wasn't retained either
|
||||
// because 1/ it's not standard 2/ the resulting size is not predictable (not bit aligned)
|
||||
// and 3/ it would not remain sortable. To validate a base32 `xid`, expect a 20 chars long,
|
||||
// all lowercase sequence of `a` to `v` letters and `0` to `9` numbers (`[0-9a-v]{20}`).
|
||||
//
|
||||
// UUID is 16 bytes (128 bits), snowflake is 8 bytes (64 bits), xid stands in between
|
||||
// with 12 bytes with a more compact string representation ready for the web and no
|
||||
// required configuration or central generation server.
|
||||
//
|
||||
// Features:
|
||||
//
|
||||
// - Size: 12 bytes (96 bits), smaller than UUID, larger than snowflake
|
||||
// - Base32 hex encoded by default (16 bytes storage when transported as printable string)
|
||||
// - Non configured, you don't need set a unique machine and/or data center id
|
||||
// - K-ordered
|
||||
// - Embedded time with 1 second precision
|
||||
// - Unicity guaranted for 16,777,216 (24 bits) unique ids per second and per host/process
|
||||
//
|
||||
// Best used with xlog's RequestIDHandler (https://godoc.org/github.com/rs/xlog#RequestIDHandler).
|
||||
//
|
||||
// References:
|
||||
//
|
||||
// - http://www.slideshare.net/davegardnerisme/unique-id-generation-in-distributed-systems
|
||||
// - https://en.wikipedia.org/wiki/Universally_unique_identifier
|
||||
// - https://blog.twitter.com/2010/announcing-snowflake
|
||||
package xid
|
||||
|
||||
import (
|
||||
"crypto/md5"
|
||||
"crypto/rand"
|
||||
"database/sql/driver"
|
||||
"encoding/binary"
|
||||
"errors"
|
||||
"fmt"
|
||||
"os"
|
||||
"sync/atomic"
|
||||
"time"
|
||||
)
|
||||
|
||||
// Code inspired from mgo/bson ObjectId
|
||||
|
||||
// ID represents a unique request id
|
||||
type ID [rawLen]byte
|
||||
|
||||
const (
|
||||
encodedLen = 20 // string encoded len
|
||||
decodedLen = 15 // len after base32 decoding with the padded data
|
||||
rawLen = 12 // binary raw len
|
||||
|
||||
// encoding stores a custom version of the base32 encoding with lower case
|
||||
// letters.
|
||||
encoding = "0123456789abcdefghijklmnopqrstuv"
|
||||
)
|
||||
|
||||
// ErrInvalidID is returned when trying to unmarshal an invalid ID
|
||||
var ErrInvalidID = errors.New("xid: invalid ID")
|
||||
|
||||
// objectIDCounter is atomically incremented when generating a new ObjectId
|
||||
// using NewObjectId() function. It's used as a counter part of an id.
|
||||
// This id is initialized with a random value.
|
||||
var objectIDCounter = randInt()
|
||||
|
||||
// machineId stores machine id generated once and used in subsequent calls
|
||||
// to NewObjectId function.
|
||||
var machineID = readMachineID()
|
||||
|
||||
// pid stores the current process id
|
||||
var pid = os.Getpid()
|
||||
|
||||
// dec is the decoding map for base32 encoding
|
||||
var dec [256]byte
|
||||
|
||||
func init() {
|
||||
for i := 0; i < len(dec); i++ {
|
||||
dec[i] = 0xFF
|
||||
}
|
||||
for i := 0; i < len(encoding); i++ {
|
||||
dec[encoding[i]] = byte(i)
|
||||
}
|
||||
}
|
||||
|
||||
// readMachineId generates machine id and puts it into the machineId global
|
||||
// variable. If this function fails to get the hostname, it will cause
|
||||
// a runtime error.
|
||||
func readMachineID() []byte {
|
||||
id := make([]byte, 3)
|
||||
if hostname, err := os.Hostname(); err == nil {
|
||||
hw := md5.New()
|
||||
hw.Write([]byte(hostname))
|
||||
copy(id, hw.Sum(nil))
|
||||
} else {
|
||||
// Fallback to rand number if machine id can't be gathered
|
||||
if _, randErr := rand.Reader.Read(id); randErr != nil {
|
||||
panic(fmt.Errorf("xid: cannot get hostname nor generate a random number: %v; %v", err, randErr))
|
||||
}
|
||||
}
|
||||
return id
|
||||
}
|
||||
|
||||
// randInt generates a random uint32
|
||||
func randInt() uint32 {
|
||||
b := make([]byte, 3)
|
||||
if _, err := rand.Reader.Read(b); err != nil {
|
||||
panic(fmt.Errorf("xid: cannot generate random number: %v;", err))
|
||||
}
|
||||
return uint32(b[0])<<16 | uint32(b[1])<<8 | uint32(b[2])
|
||||
}
|
||||
|
||||
// New generates a globaly unique ID
|
||||
func New() ID {
|
||||
var id ID
|
||||
// Timestamp, 4 bytes, big endian
|
||||
binary.BigEndian.PutUint32(id[:], uint32(time.Now().Unix()))
|
||||
// Machine, first 3 bytes of md5(hostname)
|
||||
id[4] = machineID[0]
|
||||
id[5] = machineID[1]
|
||||
id[6] = machineID[2]
|
||||
// Pid, 2 bytes, specs don't specify endianness, but we use big endian.
|
||||
id[7] = byte(pid >> 8)
|
||||
id[8] = byte(pid)
|
||||
// Increment, 3 bytes, big endian
|
||||
i := atomic.AddUint32(&objectIDCounter, 1)
|
||||
id[9] = byte(i >> 16)
|
||||
id[10] = byte(i >> 8)
|
||||
id[11] = byte(i)
|
||||
return id
|
||||
}
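// Hedged usage sketch (not part of the original file): generating an id and reading
// back the fields described in the package comment.
//
//  id := xid.New()
//  fmt.Println(id.String())  // 20 chars in [0-9a-v], k-ordered
//  fmt.Println(id.Time())    // 4-byte timestamp, 1 second precision
//  fmt.Println(id.Machine()) // first 3 bytes of md5(hostname)
//  fmt.Println(id.Pid())     // 2-byte process id
//  fmt.Println(id.Counter()) // 3-byte counter, random starting value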
|
||||
|
||||
// FromString reads an ID from its string representation
|
||||
func FromString(id string) (ID, error) {
|
||||
i := &ID{}
|
||||
err := i.UnmarshalText([]byte(id))
|
||||
return *i, err
|
||||
}
|
||||
|
||||
// String returns a base32 hex lowercased with no padding representation of the id (char set is 0-9, a-v).
|
||||
func (id ID) String() string {
|
||||
text := make([]byte, encodedLen)
|
||||
encode(text, id[:])
|
||||
return string(text)
|
||||
}
|
||||
|
||||
// MarshalText implements encoding/text TextMarshaler interface
|
||||
func (id ID) MarshalText() ([]byte, error) {
|
||||
text := make([]byte, encodedLen)
|
||||
encode(text, id[:])
|
||||
return text, nil
|
||||
}
|
||||
|
||||
// encode by unrolling the stdlib base32 algorithm + removing all safe checks
|
||||
func encode(dst, id []byte) {
|
||||
dst[0] = encoding[id[0]>>3]
|
||||
dst[1] = encoding[(id[1]>>6)&0x1F|(id[0]<<2)&0x1F]
|
||||
dst[2] = encoding[(id[1]>>1)&0x1F]
|
||||
dst[3] = encoding[(id[2]>>4)&0x1F|(id[1]<<4)&0x1F]
|
||||
dst[4] = encoding[id[3]>>7|(id[2]<<1)&0x1F]
|
||||
dst[5] = encoding[(id[3]>>2)&0x1F]
|
||||
dst[6] = encoding[id[4]>>5|(id[3]<<3)&0x1F]
|
||||
dst[7] = encoding[id[4]&0x1F]
|
||||
dst[8] = encoding[id[5]>>3]
|
||||
dst[9] = encoding[(id[6]>>6)&0x1F|(id[5]<<2)&0x1F]
|
||||
dst[10] = encoding[(id[6]>>1)&0x1F]
|
||||
dst[11] = encoding[(id[7]>>4)&0x1F|(id[6]<<4)&0x1F]
|
||||
dst[12] = encoding[id[8]>>7|(id[7]<<1)&0x1F]
|
||||
dst[13] = encoding[(id[8]>>2)&0x1F]
|
||||
dst[14] = encoding[(id[9]>>5)|(id[8]<<3)&0x1F]
|
||||
dst[15] = encoding[id[9]&0x1F]
|
||||
dst[16] = encoding[id[10]>>3]
|
||||
dst[17] = encoding[(id[11]>>6)&0x1F|(id[10]<<2)&0x1F]
|
||||
dst[18] = encoding[(id[11]>>1)&0x1F]
|
||||
dst[19] = encoding[(id[11]<<4)&0x1F]
|
||||
}
|
||||
|
||||
// UnmarshalText implements encoding/text TextUnmarshaler interface
|
||||
func (id *ID) UnmarshalText(text []byte) error {
|
||||
if len(text) != encodedLen {
|
||||
return ErrInvalidID
|
||||
}
|
||||
for _, c := range text {
|
||||
if dec[c] == 0xFF {
|
||||
return ErrInvalidID
|
||||
}
|
||||
}
|
||||
decode(id, text)
|
||||
return nil
|
||||
}
|
||||
|
||||
// decode by unrolling the stdlib base32 algorithm + removing all safe checks
|
||||
func decode(id *ID, src []byte) {
|
||||
id[0] = dec[src[0]]<<3 | dec[src[1]]>>2
|
||||
id[1] = dec[src[1]]<<6 | dec[src[2]]<<1 | dec[src[3]]>>4
|
||||
id[2] = dec[src[3]]<<4 | dec[src[4]]>>1
|
||||
id[3] = dec[src[4]]<<7 | dec[src[5]]<<2 | dec[src[6]]>>3
|
||||
id[4] = dec[src[6]]<<5 | dec[src[7]]
|
||||
id[5] = dec[src[8]]<<3 | dec[src[9]]>>2
|
||||
id[6] = dec[src[9]]<<6 | dec[src[10]]<<1 | dec[src[11]]>>4
|
||||
id[7] = dec[src[11]]<<4 | dec[src[12]]>>1
|
||||
id[8] = dec[src[12]]<<7 | dec[src[13]]<<2 | dec[src[14]]>>3
|
||||
id[9] = dec[src[14]]<<5 | dec[src[15]]
|
||||
id[10] = dec[src[16]]<<3 | dec[src[17]]>>2
|
||||
id[11] = dec[src[17]]<<6 | dec[src[18]]<<1 | dec[src[19]]>>4
|
||||
}
|
||||
|
||||
// Time returns the timestamp part of the id.
|
||||
// It's a runtime error to call this method with an invalid id.
|
||||
func (id ID) Time() time.Time {
|
||||
// First 4 bytes of ObjectId is 32-bit big-endian seconds from epoch.
|
||||
secs := int64(binary.BigEndian.Uint32(id[0:4]))
|
||||
return time.Unix(secs, 0)
|
||||
}
|
||||
|
||||
// Machine returns the 3-byte machine id part of the id.
|
||||
// It's a runtime error to call this method with an invalid id.
|
||||
func (id ID) Machine() []byte {
|
||||
return id[4:7]
|
||||
}
|
||||
|
||||
// Pid returns the process id part of the id.
|
||||
// It's a runtime error to call this method with an invalid id.
|
||||
func (id ID) Pid() uint16 {
|
||||
return binary.BigEndian.Uint16(id[7:9])
|
||||
}
|
||||
|
||||
// Counter returns the incrementing value part of the id.
|
||||
// It's a runtime error to call this method with an invalid id.
|
||||
func (id ID) Counter() int32 {
|
||||
b := id[9:12]
|
||||
// Counter is stored as big-endian 3-byte value
|
||||
return int32(uint32(b[0])<<16 | uint32(b[1])<<8 | uint32(b[2]))
|
||||
}
|
||||
|
||||
// Value implements the driver.Valuer interface.
|
||||
func (id ID) Value() (driver.Value, error) {
|
||||
b, err := id.MarshalText()
|
||||
return string(b), err
|
||||
}
|
||||
|
||||
// Scan implements the sql.Scanner interface.
|
||||
func (id *ID) Scan(value interface{}) (err error) {
|
||||
switch val := value.(type) {
|
||||
case string:
|
||||
return id.UnmarshalText([]byte(val))
|
||||
case []byte:
|
||||
return id.UnmarshalText(val)
|
||||
default:
|
||||
return fmt.Errorf("xid: scanning unsupported type: %T", value)
|
||||
}
|
||||
}
|
||||
@ -5,6 +5,7 @@ import (
|
||||
"log"
|
||||
"net"
|
||||
"runtime/debug"
|
||||
"snail007/proxy/services/kcpcfg"
|
||||
"strconv"
|
||||
|
||||
kcp "github.com/xtaci/kcp-go"
|
||||
@ -41,8 +42,8 @@ func NewServerChannelHost(host string) ServerChannel {
|
||||
func (sc *ServerChannel) SetErrAcceptHandler(fn func(err error)) {
|
||||
sc.errAcceptHandler = fn
|
||||
}
|
||||
func (sc *ServerChannel) ListenTls(certBytes, keyBytes []byte, fn func(conn net.Conn)) (err error) {
|
||||
sc.Listener, err = ListenTls(sc.ip, sc.port, certBytes, keyBytes)
|
||||
func (sc *ServerChannel) ListenTls(certBytes, keyBytes, caCertBytes []byte, fn func(conn net.Conn)) (err error) {
|
||||
sc.Listener, err = ListenTls(sc.ip, sc.port, certBytes, keyBytes, caCertBytes)
|
||||
if err == nil {
|
||||
go func() {
|
||||
defer func() {
|
||||
@ -138,11 +139,23 @@ func (sc *ServerChannel) ListenUDP(fn func(packet []byte, localAddr, srcAddr *ne
|
||||
}
|
||||
return
|
||||
}
|
||||
func (sc *ServerChannel) ListenKCP(method, key string, fn func(conn net.Conn)) (err error) {
|
||||
var l net.Listener
|
||||
l, err = kcp.ListenWithOptions(fmt.Sprintf("%s:%d", sc.ip, sc.port), GetKCPBlock(method, key), 10, 3)
|
||||
func (sc *ServerChannel) ListenKCP(config kcpcfg.KCPConfigArgs, fn func(conn net.Conn)) (err error) {
|
||||
lis, err := kcp.ListenWithOptions(fmt.Sprintf("%s:%d", sc.ip, sc.port), config.Block, *config.DataShard, *config.ParityShard)
|
||||
if err == nil {
|
||||
sc.Listener = &l
|
||||
if err = lis.SetDSCP(*config.DSCP); err != nil {
|
||||
log.Println("SetDSCP:", err)
|
||||
return
|
||||
}
|
||||
if err = lis.SetReadBuffer(*config.SockBuf); err != nil {
|
||||
log.Println("SetReadBuffer:", err)
|
||||
return
|
||||
}
|
||||
if err = lis.SetWriteBuffer(*config.SockBuf); err != nil {
|
||||
log.Println("SetWriteBuffer:", err)
|
||||
return
|
||||
}
|
||||
sc.Listener = new(net.Listener)
|
||||
*sc.Listener = lis
|
||||
go func() {
|
||||
defer func() {
|
||||
if e := recover(); e != nil {
|
||||
@ -150,8 +163,8 @@ func (sc *ServerChannel) ListenKCP(method, key string, fn func(conn net.Conn)) (
|
||||
}
|
||||
}()
|
||||
for {
|
||||
var conn net.Conn
|
||||
conn, err = (*sc.Listener).Accept()
|
||||
//var conn net.Conn
|
||||
conn, err := lis.AcceptKCP()
|
||||
if err == nil {
|
||||
go func() {
|
||||
defer func() {
|
||||
@ -159,7 +172,18 @@ func (sc *ServerChannel) ListenKCP(method, key string, fn func(conn net.Conn)) (
|
||||
log.Printf("kcp connection handler crashed , err : %s , \ntrace:%s", e, string(debug.Stack()))
|
||||
}
|
||||
}()
|
||||
fn(conn)
|
||||
conn.SetStreamMode(true)
|
||||
conn.SetWriteDelay(true)
|
||||
conn.SetNoDelay(*config.NoDelay, *config.Interval, *config.Resend, *config.NoCongestion)
|
||||
conn.SetMtu(*config.MTU)
|
||||
conn.SetWindowSize(*config.SndWnd, *config.RcvWnd)
|
||||
conn.SetACKNoDelay(*config.AckNodelay)
|
||||
if *config.NoComp {
|
||||
fn(conn)
|
||||
} else {
|
||||
cconn := NewCompStream(conn)
|
||||
fn(cconn)
|
||||
}
|
||||
}()
|
||||
} else {
|
||||
sc.errAcceptHandler(err)
|
||||
|
||||
utils/sni/sni.go (new file, 173 lines)
@@ -0,0 +1,173 @@
|
||||
package sni
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"bytes"
|
||||
"errors"
|
||||
"io"
|
||||
"net"
|
||||
)
|
||||
|
||||
func ServerNameFromBytes(data []byte) (sn string, err error) {
|
||||
reader := bytes.NewReader(data)
|
||||
bufferedReader := bufio.NewReader(reader)
|
||||
c := bufferedConn{bufferedReader, nil, nil}
|
||||
sn, _, err = ServerNameFromConn(c)
|
||||
return
|
||||
}
|
||||
|
||||
type bufferedConn struct {
|
||||
r *bufio.Reader
|
||||
rout io.Reader
|
||||
net.Conn
|
||||
}
|
||||
|
||||
func newBufferedConn(c net.Conn) bufferedConn {
|
||||
return bufferedConn{bufio.NewReader(c), nil, c}
|
||||
}
|
||||
|
||||
func (b bufferedConn) Peek(n int) ([]byte, error) {
|
||||
return b.r.Peek(n)
|
||||
}
|
||||
|
||||
func (b bufferedConn) Read(p []byte) (int, error) {
|
||||
if b.rout != nil {
|
||||
return b.rout.Read(p)
|
||||
}
|
||||
return b.r.Read(p)
|
||||
}
|
||||
|
||||
var malformedError = errors.New("malformed client hello")
|
||||
|
||||
func getHello(b []byte) (string, error) {
|
||||
rest := b[5:]
|
||||
|
||||
if len(rest) == 0 {
|
||||
return "", malformedError
|
||||
}
|
||||
|
||||
current := 0
|
||||
handshakeType := rest[0]
|
||||
current += 1
|
||||
if handshakeType != 0x1 {
|
||||
return "", errors.New("Not a ClientHello")
|
||||
}
|
||||
|
||||
// Skip over another length
|
||||
current += 3
|
||||
// Skip over protocolversion
|
||||
current += 2
|
||||
// Skip over random number
|
||||
current += 4 + 28
|
||||
|
||||
if current > len(rest) {
|
||||
return "", malformedError
|
||||
}
|
||||
|
||||
// Skip over session ID
|
||||
sessionIDLength := int(rest[current])
|
||||
current += 1
|
||||
current += sessionIDLength
|
||||
|
||||
if current+1 > len(rest) {
|
||||
return "", malformedError
|
||||
}
|
||||
|
||||
cipherSuiteLength := (int(rest[current]) << 8) + int(rest[current+1])
|
||||
current += 2
|
||||
current += cipherSuiteLength
|
||||
|
||||
if current > len(rest) {
|
||||
return "", malformedError
|
||||
}
|
||||
compressionMethodLength := int(rest[current])
|
||||
current += 1
|
||||
current += compressionMethodLength
|
||||
|
||||
if current > len(rest) {
|
||||
return "", errors.New("no extensions")
|
||||
}
|
||||
|
||||
current += 2
|
||||
|
||||
hostname := ""
|
||||
for current+4 < len(rest) && hostname == "" {
|
||||
extensionType := (int(rest[current]) << 8) + int(rest[current+1])
|
||||
current += 2
|
||||
|
||||
extensionDataLength := (int(rest[current]) << 8) + int(rest[current+1])
|
||||
current += 2
|
||||
|
||||
if extensionType == 0 {
|
||||
|
||||
// Skip over number of names as we're assuming there's just one
|
||||
current += 2
|
||||
if current > len(rest) {
|
||||
return "", malformedError
|
||||
}
|
||||
|
||||
nameType := rest[current]
|
||||
current += 1
|
||||
if nameType != 0 {
|
||||
return "", errors.New("Not a hostname")
|
||||
}
|
||||
if current+1 > len(rest) {
|
||||
return "", malformedError
|
||||
}
|
||||
nameLen := (int(rest[current]) << 8) + int(rest[current+1])
|
||||
current += 2
|
||||
if current+nameLen > len(rest) {
|
||||
return "", malformedError
|
||||
}
|
||||
hostname = string(rest[current : current+nameLen])
|
||||
}
|
||||
|
||||
current += extensionDataLength
|
||||
}
|
||||
if hostname == "" {
|
||||
return "", errors.New("No hostname")
|
||||
}
|
||||
return hostname, nil
|
||||
|
||||
}
|
||||
|
||||
func getHelloBytes(c bufferedConn) ([]byte, error) {
|
||||
b, err := c.Peek(5)
|
||||
if err != nil {
|
||||
return []byte{}, err
|
||||
}
|
||||
|
||||
if b[0] != 0x16 {
|
||||
return []byte{}, errors.New("not TLS")
|
||||
}
|
||||
|
||||
restLengthBytes := b[3:]
|
||||
restLength := (int(restLengthBytes[0]) << 8) + int(restLengthBytes[1])
|
||||
|
||||
return c.Peek(5 + restLength)
|
||||
|
||||
}
|
||||
|
||||
func getServername(c bufferedConn) (string, []byte, error) {
|
||||
all, err := getHelloBytes(c)
|
||||
if err != nil {
|
||||
return "", nil, err
|
||||
}
|
||||
name, err := getHello(all)
|
||||
if err != nil {
|
||||
return "", nil, err
|
||||
}
|
||||
return name, all, err
|
||||
|
||||
}
|
||||
|
||||
// Uses SNI to get the name of the server from the connection. Returns the ServerName and a buffered connection that will not have been read off of.
|
||||
func ServerNameFromConn(c net.Conn) (string, net.Conn, error) {
|
||||
bufconn := newBufferedConn(c)
|
||||
sn, helloBytes, err := getServername(bufconn)
|
||||
if err != nil {
|
||||
return "", nil, err
|
||||
}
|
||||
bufconn.rout = io.MultiReader(bytes.NewBuffer(helloBytes), c)
|
||||
return sn, bufconn, nil
|
||||
}
|
||||
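The sni helpers above peek at the TLS ClientHello through a bufio.Reader and then replay the buffered bytes with io.MultiReader so the downstream handler still sees the whole stream. A minimal standalone sketch of that wrap-peek-replay pattern using only the standard library (the listener address and handler are illustrative, not part of the patch):

```go
package main

import (
	"bufio"
	"bytes"
	"io"
	"log"
	"net"
)

// peekConn wraps a net.Conn so bytes read for inspection can be replayed
// to whatever handles the connection next (same idea as bufferedConn above).
type peekConn struct {
	net.Conn
	r io.Reader // multi-reader: peeked bytes first, then the raw conn
}

func (p peekConn) Read(b []byte) (int, error) { return p.r.Read(b) }

func handle(c net.Conn) {
	defer c.Close()
	br := bufio.NewReader(c)
	head, err := br.Peek(1) // look at the first byte without consuming it
	if err != nil {
		log.Println("peek:", err)
		return
	}
	isTLS := head[0] == 0x16 // TLS handshake record type
	log.Printf("from %s, looks like TLS: %v", c.RemoteAddr(), isTLS)

	// Drain whatever bufio already buffered and stitch it back in front
	// of the raw connection so downstream readers see the full stream.
	buffered, _ := br.Peek(br.Buffered())
	replayed := peekConn{Conn: c, r: io.MultiReader(bytes.NewReader(buffered), c)}
	_, _ = io.Copy(io.Discard, replayed) // placeholder for the real proxy logic
}

func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:9999") // illustrative address
	if err != nil {
		log.Fatal(err)
	}
	for {
		c, err := ln.Accept()
		if err != nil {
			log.Fatal(err)
		}
		go handle(c)
	}
}
```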
@@ -53,14 +53,19 @@ type Request struct {
|
||||
rw io.ReadWriter
|
||||
}
|
||||
|
||||
func NewRequest(rw io.ReadWriter) (req Request, err interface{}) {
|
||||
var b [1024]byte
|
||||
func NewRequest(rw io.ReadWriter, header ...[]byte) (req Request, err interface{}) {
|
||||
var b = make([]byte, 1024)
|
||||
var n int
|
||||
req = Request{rw: rw}
|
||||
n, err = rw.Read(b[:])
|
||||
if err != nil {
|
||||
err = fmt.Errorf("read req data fail,ERR: %s", err)
|
||||
return
|
||||
if len(header) == 1 {
|
||||
b = header[0]
|
||||
n = len(header[0])
|
||||
} else {
|
||||
n, err = rw.Read(b[:])
|
||||
if err != nil {
|
||||
err = fmt.Errorf("read req data fail,ERR: %s", err)
|
||||
return
|
||||
}
|
||||
}
|
||||
req.ver = uint8(b[0])
|
||||
req.cmd = uint8(b[1])
|
||||
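The new variadic `header ...[]byte` parameter lets a caller that has already read the first bytes (for example while sniffing the protocol) hand them to the parser instead of forcing a second Read. A small sketch of that optional-first-buffer pattern in isolation (the function name is made up for illustration):

```go
package main

import (
	"bytes"
	"fmt"
	"io"
)

// readOrReuse returns the initial request bytes: if the caller already read
// them, they are reused as-is; otherwise one Read is performed. This mirrors
// the `header ...[]byte` pattern used by NewRequest, NewMethodsRequest and
// NewHTTPRequest in this patch.
func readOrReuse(rw io.Reader, header ...[]byte) ([]byte, error) {
	if len(header) == 1 {
		return header[0], nil
	}
	buf := make([]byte, 1024)
	n, err := rw.Read(buf)
	if err != nil {
		return nil, fmt.Errorf("read req data fail, ERR: %w", err)
	}
	return buf[:n], nil
}

func main() {
	// First call: no pre-read bytes, so the function reads from the stream.
	b, _ := readOrReuse(bytes.NewBufferString("\x05\x01\x00"))
	fmt.Printf("read %d bytes\n", len(b))

	// Second call: bytes were already consumed by a sniffer, reuse them.
	pre := []byte{0x05, 0x01, 0x00}
	b, _ = readOrReuse(bytes.NewBuffer(nil), pre)
	fmt.Printf("reused %d bytes\n", len(b))
}
```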
@@ -150,7 +155,7 @@ type MethodsRequest struct {
|
||||
rw *io.ReadWriter
|
||||
}
|
||||
|
||||
func NewMethodsRequest(r io.ReadWriter) (s MethodsRequest, err interface{}) {
|
||||
func NewMethodsRequest(r io.ReadWriter, header ...[]byte) (s MethodsRequest, err interface{}) {
|
||||
defer func() {
|
||||
if err == nil {
|
||||
err = recover()
|
||||
@@ -160,9 +165,14 @@ func NewMethodsRequest(r io.ReadWriter) (s MethodsRequest, err interface{}) {
|
||||
s.rw = &r
|
||||
var buf = make([]byte, 300)
|
||||
var n int
|
||||
n, err = r.Read(buf)
|
||||
if err != nil {
|
||||
return
|
||||
if len(header) == 1 {
|
||||
buf = header[0]
|
||||
n = len(header[0])
|
||||
} else {
|
||||
n, err = r.Read(buf)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
}
|
||||
if buf[0] != 0x05 {
|
||||
err = fmt.Errorf("socks version not supported")
|
||||
|
||||
385 utils/structs.go
@@ -5,15 +5,21 @@ import (
|
||||
"crypto/tls"
|
||||
"encoding/base64"
|
||||
"encoding/binary"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"log"
|
||||
"net"
|
||||
"net/url"
|
||||
"snail007/proxy/services/kcpcfg"
|
||||
"snail007/proxy/utils/sni"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/golang/snappy"
|
||||
"github.com/miekg/dns"
|
||||
)
|
||||
|
||||
type Checker struct {
|
||||
@@ -51,7 +57,10 @@ func NewChecker(timeout int, interval int64, blockedFile, directFile string) Che
|
||||
if !ch.directMap.IsEmpty() {
|
||||
log.Printf("direct file loaded , domains : %d", ch.directMap.Count())
|
||||
}
|
||||
ch.start()
|
||||
if interval > 0 {
|
||||
ch.start()
|
||||
}
|
||||
|
||||
return ch
|
||||
}
|
||||
|
||||
@@ -74,21 +83,19 @@ func (c *Checker) loadMap(f string) (dataMap ConcurrentMap) {
|
||||
}
|
||||
func (c *Checker) start() {
|
||||
go func() {
|
||||
//log.Printf("checker started")
|
||||
for {
|
||||
//log.Printf("checker did")
|
||||
for _, v := range c.data.Items() {
|
||||
go func(item CheckerItem) {
|
||||
if c.isNeedCheck(item) {
|
||||
//log.Printf("check %s", item.Domain)
|
||||
//log.Printf("check %s", item.Host)
|
||||
var conn net.Conn
|
||||
var err error
|
||||
if item.IsHTTPS {
|
||||
conn, err = ConnectHost(item.Host, c.timeout)
|
||||
if err == nil {
|
||||
conn.SetDeadline(time.Now().Add(time.Millisecond))
|
||||
conn.Close()
|
||||
}
|
||||
} else {
|
||||
err = HTTPGet(item.URL, c.timeout)
|
||||
conn, err = ConnectHost(item.Host, c.timeout)
|
||||
if err == nil {
|
||||
conn.SetDeadline(time.Now().Add(time.Millisecond))
|
||||
conn.Close()
|
||||
}
|
||||
if err != nil {
|
||||
item.FailCount = item.FailCount + 1
|
||||
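After this change the health check is just a TCP dial with a timeout for both HTTPS and plain targets (the old HTTP GET path is gone). The whole check, extracted into a standalone sketch:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// reachable is the essence of the new check above: a TCP dial with a
// timeout, closed immediately on success.
func reachable(host string, timeout time.Duration) bool {
	conn, err := net.DialTimeout("tcp", host, timeout)
	if err != nil {
		return false
	}
	conn.SetDeadline(time.Now().Add(time.Millisecond))
	conn.Close()
	return true
}

func main() {
	fmt.Println(reachable("example.com:443", 3*time.Second))
}
```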
@@ -155,22 +162,13 @@ func (c *Checker) domainIsInMap(address string, blockedMap bool) bool {
|
||||
}
|
||||
return false
|
||||
}
|
||||
func (c *Checker) Add(address string, isHTTPS bool, method, URL string, data []byte) {
|
||||
func (c *Checker) Add(address string) {
|
||||
if c.domainIsInMap(address, false) || c.domainIsInMap(address, true) {
|
||||
return
|
||||
}
|
||||
if !isHTTPS && strings.ToLower(method) != "get" {
|
||||
return
|
||||
}
|
||||
var item CheckerItem
|
||||
u := strings.Split(address, ":")
|
||||
item = CheckerItem{
|
||||
URL: URL,
|
||||
Domain: u[0],
|
||||
Host: address,
|
||||
Data: data,
|
||||
IsHTTPS: isHTTPS,
|
||||
Method: method,
|
||||
Host: address,
|
||||
}
|
||||
c.data.SetIfAbsent(item.Host, item)
|
||||
}
|
||||
@@ -181,11 +179,13 @@ type BasicAuth struct {
|
||||
authOkCode int
|
||||
authTimeout int
|
||||
authRetry int
|
||||
dns *DomainResolver
|
||||
}
|
||||
|
||||
func NewBasicAuth() BasicAuth {
|
||||
func NewBasicAuth(dns *DomainResolver) BasicAuth {
|
||||
return BasicAuth{
|
||||
data: NewConcurrentMap(),
|
||||
dns: dns,
|
||||
}
|
||||
}
|
||||
func (ba *BasicAuth) SetAuthURL(URL string, code, timeout, retry int) {
|
||||
@@ -249,18 +249,27 @@ func (ba *BasicAuth) checkFromURL(userpass, ip, target string) (err error) {
|
||||
if len(u) != 2 {
|
||||
return
|
||||
}
|
||||
|
||||
URL := ba.authURL
|
||||
if strings.Contains(URL, "?") {
|
||||
URL += "&"
|
||||
} else {
|
||||
URL += "?"
|
||||
}
|
||||
URL += fmt.Sprintf("user=%s&pass=%s&ip=%s&target=%s", u[0], u[1], ip, target)
|
||||
URL += fmt.Sprintf("user=%s&pass=%s&ip=%s&target=%s", u[0], u[1], ip, url.QueryEscape(target))
|
||||
getURL := URL
|
||||
var domain string
|
||||
if ba.dns != nil {
|
||||
_url, _ := url.Parse(ba.authURL)
|
||||
domain = _url.Host
|
||||
domainIP := ba.dns.MustResolve(domain)
|
||||
getURL = strings.Replace(URL, domain, domainIP, 1)
|
||||
}
|
||||
var code int
|
||||
var tryCount = 0
|
||||
var body []byte
|
||||
for tryCount <= ba.authRetry {
|
||||
body, code, err = HttpGet(URL, ba.authTimeout)
|
||||
body, code, err = HttpGet(getURL, ba.authTimeout, domain)
|
||||
if err == nil && code == ba.authOkCode {
|
||||
break
|
||||
} else if err != nil {
|
||||
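The checkFromURL change escapes the target with url.QueryEscape and, when a DomainResolver is configured, swaps the auth host for its resolved IP while keeping the original hostname. A hedged sketch of the same two concerns with only the standard library (the endpoint, credentials and IP are placeholders; the request is never sent):

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"strings"
	"time"
)

// buildAuthURL appends user/pass/ip/target to an auth endpoint, escaping the
// target so URLs that carry their own query string survive, which is the
// concern the url.QueryEscape change above addresses.
func buildAuthURL(authURL, user, pass, ip, target string) string {
	sep := "?"
	if strings.Contains(authURL, "?") {
		sep = "&"
	}
	return authURL + sep + fmt.Sprintf("user=%s&pass=%s&ip=%s&target=%s",
		url.QueryEscape(user), url.QueryEscape(pass), ip, url.QueryEscape(target))
}

func main() {
	u := buildAuthURL("http://auth.example.com/check?app=proxy",
		"alice", "secret", "10.0.0.2", "https://example.org:443/a?b=c")
	fmt.Println(u)

	// When the auth host has been resolved ahead of time (as BasicAuth does via
	// DomainResolver), dial the IP but keep the original hostname in the Host
	// header so name-based virtual hosting still works.
	parsed, _ := url.Parse(u)
	ipURL := strings.Replace(u, parsed.Host, "192.0.2.10", 1) // hypothetical resolved IP
	req, _ := http.NewRequest("GET", ipURL, nil)
	req.Host = parsed.Host
	client := &http.Client{Timeout: 5 * time.Second}
	_ = client // not actually sent: auth.example.com is fictitious
}
```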
@@ -302,28 +311,44 @@ type HTTPRequest struct {
|
||||
basicAuth *BasicAuth
|
||||
}
|
||||
|
||||
func NewHTTPRequest(inConn *net.Conn, bufSize int, isBasicAuth bool, basicAuth *BasicAuth) (req HTTPRequest, err error) {
|
||||
func NewHTTPRequest(inConn *net.Conn, bufSize int, isBasicAuth bool, basicAuth *BasicAuth, header ...[]byte) (req HTTPRequest, err error) {
|
||||
buf := make([]byte, bufSize)
|
||||
len := 0
|
||||
n := 0
|
||||
req = HTTPRequest{
|
||||
conn: inConn,
|
||||
}
|
||||
len, err = (*inConn).Read(buf[:])
|
||||
if err != nil {
|
||||
if err != io.EOF {
|
||||
err = fmt.Errorf("http decoder read err:%s", err)
|
||||
if len(header) == 1 {
|
||||
buf = header[0]
|
||||
n = len(header[0])
|
||||
} else {
|
||||
n, err = (*inConn).Read(buf[:])
|
||||
if err != nil {
|
||||
if err != io.EOF {
|
||||
err = fmt.Errorf("http decoder read err:%s", err)
|
||||
}
|
||||
CloseConn(inConn)
|
||||
return
|
||||
}
|
||||
CloseConn(inConn)
|
||||
return
|
||||
}
|
||||
req.HeadBuf = buf[:len]
|
||||
index := bytes.IndexByte(req.HeadBuf, '\n')
|
||||
if index == -1 {
|
||||
err = fmt.Errorf("http decoder data line err:%s", SubStr(string(req.HeadBuf), 0, 50))
|
||||
CloseConn(inConn)
|
||||
return
|
||||
|
||||
req.HeadBuf = buf[:n]
|
||||
//fmt.Println(string(req.HeadBuf))
|
||||
//try sni
|
||||
serverName, err0 := sni.ServerNameFromBytes(req.HeadBuf)
|
||||
if err0 == nil {
|
||||
//sni success
|
||||
req.Method = "SNI"
|
||||
req.hostOrURL = "https://" + serverName + ":443"
|
||||
} else {
|
||||
//sni fail , try http
|
||||
index := bytes.IndexByte(req.HeadBuf, '\n')
|
||||
if index == -1 {
|
||||
err = fmt.Errorf("http decoder data line err:%s", SubStr(string(req.HeadBuf), 0, 50))
|
||||
CloseConn(inConn)
|
||||
return
|
||||
}
|
||||
fmt.Sscanf(string(req.HeadBuf[:index]), "%s%s", &req.Method, &req.hostOrURL)
|
||||
}
|
||||
fmt.Sscanf(string(req.HeadBuf[:index]), "%s%s", &req.Method, &req.hostOrURL)
|
||||
if req.Method == "" || req.hostOrURL == "" {
|
||||
err = fmt.Errorf("http decoder data err:%s", SubStr(string(req.HeadBuf), 0, 50))
|
||||
CloseConn(inConn)
|
||||
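NewHTTPRequest now tries SNI on the first buffer and only falls back to parsing an HTTP request line when that fails. The sketch below shows the shape of that decision using just the TLS record type; the full ClientHello parse that sni.ServerNameFromBytes performs is omitted:

```go
package main

import (
	"bytes"
	"fmt"
)

// classifyHead decides how to treat the first bytes read from a client:
// a TLS ClientHello (handshake record type 0x16) or a plain HTTP request
// line. This mirrors the "try SNI, fall back to HTTP" flow above.
func classifyHead(head []byte) (method, hostOrURL string, err error) {
	if len(head) > 0 && head[0] == 0x16 {
		// In the real code the server name would be pulled out of the hello
		// and turned into "https://<name>:443".
		return "SNI", "(server name from ClientHello)", nil
	}
	idx := bytes.IndexByte(head, '\n')
	if idx == -1 {
		return "", "", fmt.Errorf("no request line in %q", head)
	}
	fmt.Sscanf(string(head[:idx]), "%s%s", &method, &hostOrURL)
	if method == "" || hostOrURL == "" {
		return "", "", fmt.Errorf("malformed request line %q", head[:idx])
	}
	return method, hostOrURL, nil
}

func main() {
	m, h, _ := classifyHead([]byte("GET http://example.com/ HTTP/1.1\r\nHost: example.com\r\n\r\n"))
	fmt.Println(m, h)

	m, h, _ = classifyHead([]byte{0x16, 0x03, 0x01})
	fmt.Println(m, h)
}
```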
@@ -348,22 +373,25 @@ func (req *HTTPRequest) HTTP() (err error) {
|
||||
return
|
||||
}
|
||||
}
|
||||
req.URL, err = req.getHTTPURL()
|
||||
if err == nil {
|
||||
var u *url.URL
|
||||
u, err = url.Parse(req.URL)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
req.Host = u.Host
|
||||
req.addPortIfNot()
|
||||
req.URL = req.getHTTPURL()
|
||||
var u *url.URL
|
||||
u, err = url.Parse(req.URL)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
req.Host = u.Host
|
||||
req.addPortIfNot()
|
||||
return
|
||||
}
|
||||
func (req *HTTPRequest) HTTPS() (err error) {
|
||||
if req.isBasicAuth {
|
||||
err = req.BasicAuth()
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
}
|
||||
req.Host = req.hostOrURL
|
||||
req.addPortIfNot()
|
||||
//_, err = fmt.Fprint(*req.conn, "HTTP/1.1 200 Connection established\r\n\r\n")
|
||||
return
|
||||
}
|
||||
func (req *HTTPRequest) HTTPSReply() (err error) {
|
||||
@@ -376,22 +404,20 @@ func (req *HTTPRequest) IsHTTPS() bool {
|
||||
|
||||
func (req *HTTPRequest) BasicAuth() (err error) {
|
||||
|
||||
//log.Printf("request :%s", string(b[:n]))
|
||||
authorization, err := req.getHeader("Authorization")
|
||||
if err != nil {
|
||||
fmt.Fprint((*req.conn), "HTTP/1.1 401 Unauthorized\r\nWWW-Authenticate: Basic realm=\"\"\r\n\r\nUnauthorized")
|
||||
// log.Printf("request :%s", string(req.HeadBuf))
|
||||
code := "407"
|
||||
authorization := req.getHeader("Proxy-Authorization")
|
||||
// if authorization == "" {
|
||||
// authorization = req.getHeader("Authorization")
|
||||
// code = "401"
|
||||
// }
|
||||
if authorization == "" {
|
||||
fmt.Fprintf((*req.conn), "HTTP/1.1 %s Unauthorized\r\nWWW-Authenticate: Basic realm=\"\"\r\n\r\nUnauthorized", code)
|
||||
CloseConn(req.conn)
|
||||
err = errors.New("require auth header data")
|
||||
return
|
||||
}
|
||||
if authorization == "" {
|
||||
authorization, err = req.getHeader("Proxy-Authorization")
|
||||
if err != nil {
|
||||
fmt.Fprint((*req.conn), "HTTP/1.1 401 Unauthorized\r\nWWW-Authenticate: Basic realm=\"\"\r\n\r\nUnauthorized")
|
||||
CloseConn(req.conn)
|
||||
return
|
||||
}
|
||||
}
|
||||
//log.Printf("Authorization:%s", authorization)
|
||||
//log.Printf("Authorization:%authorization = req.getHeader("Authorization")
|
||||
basic := strings.Fields(authorization)
|
||||
if len(basic) != 2 {
|
||||
err = fmt.Errorf("authorization data error,ERR:%s", authorization)
|
||||
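The BasicAuth change reads Proxy-Authorization and answers with a 407-style challenge instead of 401. A standalone sketch of parsing a Basic proxy credential; note that the challenge shown here uses Proxy-Authenticate, the RFC 7235 header that pairs with 407, whereas the patch reuses WWW-Authenticate:

```go
package main

import (
	"encoding/base64"
	"fmt"
	"strings"
)

// parseProxyBasicAuth extracts user and pass from a
// "Proxy-Authorization: Basic <base64>" header value.
func parseProxyBasicAuth(headerValue string) (user, pass string, err error) {
	parts := strings.Fields(headerValue)
	if len(parts) != 2 || !strings.EqualFold(parts[0], "Basic") {
		return "", "", fmt.Errorf("authorization data error: %q", headerValue)
	}
	raw, err := base64.StdEncoding.DecodeString(parts[1])
	if err != nil {
		return "", "", err
	}
	userpass := strings.SplitN(string(raw), ":", 2)
	if len(userpass) != 2 {
		return "", "", fmt.Errorf("malformed credentials")
	}
	return userpass[0], userpass[1], nil
}

func main() {
	value := "Basic " + base64.StdEncoding.EncodeToString([]byte("alice:secret"))
	u, p, _ := parseProxyBasicAuth(value)
	fmt.Println(u, p)

	// What the challenge would look like if credentials were missing:
	challenge := "HTTP/1.1 407 Proxy Authentication Required\r\n" +
		"Proxy-Authenticate: Basic realm=\"proxy\"\r\n\r\n"
	fmt.Print(challenge)
}
```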
@@ -409,30 +435,30 @@ func (req *HTTPRequest) BasicAuth() (err error) {
|
||||
if req.IsHTTPS() {
|
||||
URL = "https://" + req.Host
|
||||
} else {
|
||||
URL, _ = req.getHTTPURL()
|
||||
URL = req.getHTTPURL()
|
||||
}
|
||||
authOk := (*req.basicAuth).Check(string(user), addr[0], URL)
|
||||
//log.Printf("auth %s,%v", string(user), authOk)
|
||||
if !authOk {
|
||||
fmt.Fprint((*req.conn), "HTTP/1.1 401 Unauthorized\r\n\r\nUnauthorized")
|
||||
fmt.Fprintf((*req.conn), "HTTP/1.1 %s Unauthorized\r\n\r\nUnauthorized", code)
|
||||
CloseConn(req.conn)
|
||||
err = fmt.Errorf("basic auth fail")
|
||||
return
|
||||
}
|
||||
return
|
||||
}
|
||||
func (req *HTTPRequest) getHTTPURL() (URL string, err error) {
|
||||
func (req *HTTPRequest) getHTTPURL() (URL string) {
|
||||
if !strings.HasPrefix(req.hostOrURL, "/") {
|
||||
return req.hostOrURL, nil
|
||||
return req.hostOrURL
|
||||
}
|
||||
_host, err := req.getHeader("host")
|
||||
if err != nil {
|
||||
_host := req.getHeader("host")
|
||||
if _host == "" {
|
||||
return
|
||||
}
|
||||
URL = fmt.Sprintf("http://%s%s", _host, req.hostOrURL)
|
||||
return
|
||||
}
|
||||
func (req *HTTPRequest) getHeader(key string) (val string, err error) {
|
||||
func (req *HTTPRequest) getHeader(key string) (val string) {
|
||||
key = strings.ToUpper(key)
|
||||
lines := strings.Split(string(req.HeadBuf), "\r\n")
|
||||
//log.Println(lines)
|
||||
@@ -465,27 +491,27 @@ func (req *HTTPRequest) addPortIfNot() (newHost string) {
|
||||
}
|
||||
|
||||
type OutPool struct {
|
||||
Pool ConnPool
|
||||
dur int
|
||||
typ string
|
||||
certBytes []byte
|
||||
keyBytes []byte
|
||||
kcpMethod string
|
||||
kcpKey string
|
||||
address string
|
||||
timeout int
|
||||
Pool ConnPool
|
||||
dur int
|
||||
typ string
|
||||
certBytes []byte
|
||||
keyBytes []byte
|
||||
caCertBytes []byte
|
||||
kcp kcpcfg.KCPConfigArgs
|
||||
address string
|
||||
timeout int
|
||||
}
|
||||
|
||||
func NewOutPool(dur int, typ, kcpMethod, kcpKey string, certBytes, keyBytes []byte, address string, timeout int, InitialCap int, MaxCap int) (op OutPool) {
|
||||
func NewOutPool(dur int, typ string, kcp kcpcfg.KCPConfigArgs, certBytes, keyBytes, caCertBytes []byte, address string, timeout int, InitialCap int, MaxCap int) (op OutPool) {
|
||||
op = OutPool{
|
||||
dur: dur,
|
||||
typ: typ,
|
||||
certBytes: certBytes,
|
||||
keyBytes: keyBytes,
|
||||
kcpMethod: kcpMethod,
|
||||
kcpKey: kcpKey,
|
||||
address: address,
|
||||
timeout: timeout,
|
||||
dur: dur,
|
||||
typ: typ,
|
||||
certBytes: certBytes,
|
||||
keyBytes: keyBytes,
|
||||
caCertBytes: caCertBytes,
|
||||
kcp: kcp,
|
||||
address: address,
|
||||
timeout: timeout,
|
||||
}
|
||||
var err error
|
||||
op.Pool, err = NewConnPool(poolConfig{
|
||||
@@ -519,12 +545,12 @@ func NewOutPool(dur int, typ, kcpMethod, kcpKey string, certBytes, keyBytes []by
|
||||
func (op *OutPool) getConn() (conn interface{}, err error) {
|
||||
if op.typ == "tls" {
|
||||
var _conn tls.Conn
|
||||
_conn, err = TlsConnectHost(op.address, op.timeout, op.certBytes, op.keyBytes)
|
||||
_conn, err = TlsConnectHost(op.address, op.timeout, op.certBytes, op.keyBytes, op.caCertBytes)
|
||||
if err == nil {
|
||||
conn = net.Conn(&_conn)
|
||||
}
|
||||
} else if op.typ == "kcp" {
|
||||
conn, err = ConnectKCPHost(op.address, op.kcpMethod, op.kcpKey)
|
||||
conn, err = ConnectKCPHost(op.address, op.kcp)
|
||||
} else {
|
||||
conn, err = ConnectHost(op.address, op.timeout)
|
||||
}
|
||||
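getConn now dials according to the configured transport type. A reduced sketch of that switch with only the standard library: the kcp branch is left out (it needs kcp-go), and InsecureSkipVerify stands in for the certificate and CA handling that TlsConnectHost performs with certBytes/keyBytes/caCertBytes:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"time"
)

// dialByType mirrors OutPool.getConn's switch on the transport type:
// "tls" wraps a TCP connection, anything else is plain TCP.
func dialByType(typ, address string, timeout time.Duration) (net.Conn, error) {
	switch typ {
	case "tls":
		d := &net.Dialer{Timeout: timeout}
		return tls.DialWithDialer(d, "tcp", address, &tls.Config{
			InsecureSkipVerify: true, // placeholder for real certificate pinning
		})
	default: // "tcp"
		return net.DialTimeout("tcp", address, timeout)
	}
}

func main() {
	conn, err := dialByType("tcp", "example.com:80", 5*time.Second)
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	fmt.Println("connected to", conn.RemoteAddr())
	conn.Close()
}
```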
@@ -761,3 +787,184 @@ func (cm *ConnManager) RemoveAll() {
|
||||
cm.Remove(k)
|
||||
}
|
||||
}
|
||||
|
||||
type ClientKeyRouter struct {
|
||||
keyChan chan string
|
||||
ctrl *ConcurrentMap
|
||||
lock *sync.Mutex
|
||||
}
|
||||
|
||||
func NewClientKeyRouter(ctrl *ConcurrentMap, size int) ClientKeyRouter {
|
||||
return ClientKeyRouter{
|
||||
keyChan: make(chan string, size),
|
||||
ctrl: ctrl,
|
||||
lock: &sync.Mutex{},
|
||||
}
|
||||
}
|
||||
func (c *ClientKeyRouter) GetKey() string {
|
||||
defer c.lock.Unlock()
|
||||
c.lock.Lock()
|
||||
if len(c.keyChan) == 0 {
|
||||
EXIT:
|
||||
for _, k := range c.ctrl.Keys() {
|
||||
select {
|
||||
case c.keyChan <- k:
|
||||
default:
|
||||
goto EXIT
|
||||
}
|
||||
}
|
||||
}
|
||||
for {
|
||||
if len(c.keyChan) == 0 {
|
||||
return "*"
|
||||
}
|
||||
select {
|
||||
case key := <-c.keyChan:
|
||||
if c.ctrl.Has(key) {
|
||||
return key
|
||||
}
|
||||
default:
|
||||
return "*"
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
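ClientKeyRouter.GetKey refills a buffered channel from the control map and hands keys out round-robin, falling back to "*" when no client is registered. An equivalent sketch over a plain map plus mutex (ConcurrentMap is project-internal, so it is not used here):

```go
package main

import (
	"fmt"
	"sync"
)

// keyRouter hands registered client keys out round-robin and falls back to
// "*" (meaning "any client") when none are registered, the same policy as
// ClientKeyRouter.GetKey above.
type keyRouter struct {
	mu      sync.Mutex
	keys    map[string]bool
	keyChan chan string
}

func newKeyRouter(size int) *keyRouter {
	return &keyRouter{keys: map[string]bool{}, keyChan: make(chan string, size)}
}

func (r *keyRouter) Register(key string) {
	r.mu.Lock()
	r.keys[key] = true
	r.mu.Unlock()
}

func (r *keyRouter) GetKey() string {
	r.mu.Lock()
	defer r.mu.Unlock()
	if len(r.keyChan) == 0 {
		// Refill the ring from the currently registered keys.
		for k := range r.keys {
			select {
			case r.keyChan <- k:
			default:
			}
		}
	}
	for {
		select {
		case key := <-r.keyChan:
			if r.keys[key] { // skip keys whose client has since gone away
				return key
			}
		default:
			return "*"
		}
	}
}

func main() {
	r := newKeyRouter(8)
	r.Register("client-a")
	r.Register("client-b")
	for i := 0; i < 4; i++ {
		fmt.Println(r.GetKey()) // cycles through the registered keys
	}
}
```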
type DomainResolver struct {
|
||||
ttl int
|
||||
dnsAddrress string
|
||||
data ConcurrentMap
|
||||
}
|
||||
type DomainResolverItem struct {
|
||||
ip string
|
||||
domain string
|
||||
expiredAt int64
|
||||
}
|
||||
|
||||
func NewDomainResolver(dnsAddrress string, ttl int) DomainResolver {
|
||||
|
||||
return DomainResolver{
|
||||
ttl: ttl,
|
||||
dnsAddrress: dnsAddrress,
|
||||
data: NewConcurrentMap(),
|
||||
}
|
||||
}
|
||||
func (a *DomainResolver) MustResolve(address string) (ip string) {
|
||||
ip, _ = a.Resolve(address)
|
||||
return
|
||||
}
|
||||
func (a *DomainResolver) Resolve(address string) (ip string, err error) {
|
||||
domain := address
|
||||
port := ""
|
||||
fromCache := "false"
|
||||
defer func() {
|
||||
if port != "" {
|
||||
ip = net.JoinHostPort(ip, port)
|
||||
}
|
||||
log.Printf("dns:%s->%s,cache:%s", address, ip, fromCache)
|
||||
//a.PrintData()
|
||||
}()
|
||||
if strings.Contains(domain, ":") {
|
||||
domain, port, err = net.SplitHostPort(domain)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
}
|
||||
if net.ParseIP(domain) != nil {
|
||||
ip = domain
|
||||
fromCache = "ip ignore"
|
||||
return
|
||||
}
|
||||
item, ok := a.data.Get(domain)
|
||||
if ok {
|
||||
//log.Println("find ", domain)
|
||||
if (*item.(*DomainResolverItem)).expiredAt > time.Now().Unix() {
|
||||
ip = (*item.(*DomainResolverItem)).ip
|
||||
fromCache = "true"
|
||||
//log.Println("from cache ", domain)
|
||||
return
|
||||
}
|
||||
} else {
|
||||
item = &DomainResolverItem{
|
||||
domain: domain,
|
||||
}
|
||||
|
||||
}
|
||||
c := new(dns.Client)
|
||||
c.DialTimeout = time.Millisecond * 5000
|
||||
c.ReadTimeout = time.Millisecond * 5000
|
||||
c.WriteTimeout = time.Millisecond * 5000
|
||||
m := new(dns.Msg)
|
||||
m.SetQuestion(dns.Fqdn(domain), dns.TypeA)
|
||||
m.RecursionDesired = true
|
||||
r, _, err := c.Exchange(m, a.dnsAddrress)
|
||||
if r == nil {
|
||||
return
|
||||
}
|
||||
if r.Rcode != dns.RcodeSuccess {
|
||||
err = fmt.Errorf(" *** invalid answer name %s after A query for %s", domain, a.dnsAddrress)
|
||||
return
|
||||
}
|
||||
for _, answer := range r.Answer {
|
||||
if answer.Header().Rrtype == dns.TypeA {
|
||||
info := strings.Fields(answer.String())
|
||||
if len(info) >= 5 {
|
||||
ip = info[4]
|
||||
_item := item.(*DomainResolverItem)
|
||||
(*_item).expiredAt = time.Now().Unix() + int64(a.ttl)
|
||||
(*_item).ip = ip
|
||||
a.data.Set(domain, item)
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
return
|
||||
}
|
||||
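DomainResolver.Resolve queries the configured upstream with github.com/miekg/dns and caches answers for ttl seconds. A minimal standalone A-record lookup with the same library; it type-asserts to *dns.A rather than splitting the record's string form, and the resolver address is just an example:

```go
package main

import (
	"fmt"
	"time"

	"github.com/miekg/dns"
)

// lookupA asks one upstream resolver for a domain's A records, the same
// exchange DomainResolver.Resolve performs before caching the result.
func lookupA(domain, server string) ([]string, error) {
	c := &dns.Client{Timeout: 5 * time.Second}
	m := new(dns.Msg)
	m.SetQuestion(dns.Fqdn(domain), dns.TypeA)
	m.RecursionDesired = true

	r, _, err := c.Exchange(m, server)
	if err != nil {
		return nil, err
	}
	if r.Rcode != dns.RcodeSuccess {
		return nil, fmt.Errorf("query for %s failed with rcode %d", domain, r.Rcode)
	}
	var ips []string
	for _, ans := range r.Answer {
		if a, ok := ans.(*dns.A); ok {
			ips = append(ips, a.A.String())
		}
	}
	return ips, nil
}

func main() {
	ips, err := lookupA("example.com", "8.8.8.8:53") // public resolver, for illustration
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println(ips)
}
```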
func (a *DomainResolver) PrintData() {
|
||||
for k, item := range a.data.Items() {
|
||||
d := item.(*DomainResolverItem)
|
||||
fmt.Printf("%s:ip[%s],domain[%s],expired at[%d]\n", k, (*d).ip, (*d).domain, (*d).expiredAt)
|
||||
}
|
||||
}
|
||||
func NewCompStream(conn net.Conn) *CompStream {
|
||||
c := new(CompStream)
|
||||
c.conn = conn
|
||||
c.w = snappy.NewBufferedWriter(conn)
|
||||
c.r = snappy.NewReader(conn)
|
||||
return c
|
||||
}
|
||||
|
||||
type CompStream struct {
|
||||
conn net.Conn
|
||||
w *snappy.Writer
|
||||
r *snappy.Reader
|
||||
}
|
||||
|
||||
func (c *CompStream) Read(p []byte) (n int, err error) {
|
||||
return c.r.Read(p)
|
||||
}
|
||||
|
||||
func (c *CompStream) Write(p []byte) (n int, err error) {
|
||||
n, err = c.w.Write(p)
|
||||
err = c.w.Flush()
|
||||
return n, err
|
||||
}
|
||||
|
||||
func (c *CompStream) Close() error {
|
||||
return c.conn.Close()
|
||||
}
|
||||
func (c *CompStream) LocalAddr() net.Addr {
|
||||
return c.conn.LocalAddr()
|
||||
}
|
||||
func (c *CompStream) RemoteAddr() net.Addr {
|
||||
return c.conn.RemoteAddr()
|
||||
}
|
||||
func (c *CompStream) SetDeadline(t time.Time) error {
|
||||
return c.conn.SetDeadline(t)
|
||||
}
|
||||
func (c *CompStream) SetReadDeadline(t time.Time) error {
|
||||
return c.conn.SetReadDeadline(t)
|
||||
}
|
||||
func (c *CompStream) SetWriteDeadline(t time.Time) error {
|
||||
return c.conn.SetWriteDeadline(t)
|
||||
}
|
||||
|
||||
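CompStream wraps a net.Conn in snappy's framed reader and writer and flushes on every Write so small messages are not held back by the compressor. A round-trip sketch over net.Pipe with the same library calls:

```go
package main

import (
	"fmt"
	"net"

	"github.com/golang/snappy"
)

// A round trip over net.Pipe through the same snappy framing CompStream
// uses: each Write is flushed immediately so the peer never stalls waiting
// for a full compression block.
func main() {
	a, b := net.Pipe()

	go func() {
		w := snappy.NewBufferedWriter(a)
		if _, err := w.Write([]byte("hello over a compressed stream")); err != nil {
			panic(err)
		}
		w.Flush() // flush per write, as CompStream.Write does
		a.Close()
	}()

	r := snappy.NewReader(b)
	buf := make([]byte, 256)
	n, err := r.Read(buf)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(buf[:n]))
}
```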
27 vendor/github.com/alecthomas/template/LICENSE generated vendored Normal file
@@ -0,0 +1,27 @@
|
||||
Copyright (c) 2012 The Go Authors. All rights reserved.
|
||||
|
||||
Redistribution and use in source and binary forms, with or without
|
||||
modification, are permitted provided that the following conditions are
|
||||
met:
|
||||
|
||||
* Redistributions of source code must retain the above copyright
|
||||
notice, this list of conditions and the following disclaimer.
|
||||
* Redistributions in binary form must reproduce the above
|
||||
copyright notice, this list of conditions and the following disclaimer
|
||||
in the documentation and/or other materials provided with the
|
||||
distribution.
|
||||
* Neither the name of Google Inc. nor the names of its
|
||||
contributors may be used to endorse or promote products derived from
|
||||
this software without specific prior written permission.
|
||||
|
||||
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
|
||||
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
|
||||
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
|
||||
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
|
||||
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
|
||||
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
|
||||
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
|
||||
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
|
||||
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
|
||||
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
|
||||
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||
25 vendor/github.com/alecthomas/template/README.md generated vendored Normal file
@@ -0,0 +1,25 @@
|
||||
# Go's `text/template` package with newline elision
|
||||
|
||||
This is a fork of Go 1.4's [text/template](http://golang.org/pkg/text/template/) package with one addition: a backslash immediately after a closing delimiter will delete all subsequent newlines until a non-newline.
|
||||
|
||||
eg.
|
||||
|
||||
```
|
||||
{{if true}}\
|
||||
hello
|
||||
{{end}}\
|
||||
```
|
||||
|
||||
Will result in:
|
||||
|
||||
```
|
||||
hello\n
|
||||
```
|
||||
|
||||
Rather than:
|
||||
|
||||
```
|
||||
\n
|
||||
hello\n
|
||||
\n
|
||||
```
|
||||
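Assuming the fork keeps text/template's exported API (New, Must, Parse, Execute), the elision described in the README can be exercised like this; the template source below is the README's own example written as a Go string:

```go
package main

import (
	"os"

	"github.com/alecthomas/template"
)

// The fork's one addition: a backslash right after a closing delimiter eats
// the following newlines, so block actions do not leave blank lines behind.
func main() {
	const src = "{{if true}}\\\nhello\n{{end}}\\\n"
	t := template.Must(template.New("demo").Parse(src))
	_ = t.Execute(os.Stdout, nil) // prints "hello\n" with no surrounding blank lines
}
```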
406 vendor/github.com/alecthomas/template/doc.go generated vendored Normal file
@@ -0,0 +1,406 @@
|
||||
// Copyright 2011 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
/*
|
||||
Package template implements data-driven templates for generating textual output.
|
||||
|
||||
To generate HTML output, see package html/template, which has the same interface
|
||||
as this package but automatically secures HTML output against certain attacks.
|
||||
|
||||
Templates are executed by applying them to a data structure. Annotations in the
|
||||
template refer to elements of the data structure (typically a field of a struct
|
||||
or a key in a map) to control execution and derive values to be displayed.
|
||||
Execution of the template walks the structure and sets the cursor, represented
|
||||
by a period '.' and called "dot", to the value at the current location in the
|
||||
structure as execution proceeds.
|
||||
|
||||
The input text for a template is UTF-8-encoded text in any format.
|
||||
"Actions"--data evaluations or control structures--are delimited by
|
||||
"{{" and "}}"; all text outside actions is copied to the output unchanged.
|
||||
Actions may not span newlines, although comments can.
|
||||
|
||||
Once parsed, a template may be executed safely in parallel.
|
||||
|
||||
Here is a trivial example that prints "17 items are made of wool".
|
||||
|
||||
type Inventory struct {
|
||||
Material string
|
||||
Count uint
|
||||
}
|
||||
sweaters := Inventory{"wool", 17}
|
||||
tmpl, err := template.New("test").Parse("{{.Count}} items are made of {{.Material}}")
|
||||
if err != nil { panic(err) }
|
||||
err = tmpl.Execute(os.Stdout, sweaters)
|
||||
if err != nil { panic(err) }
|
||||
|
||||
More intricate examples appear below.
|
||||
|
||||
Actions
|
||||
|
||||
Here is the list of actions. "Arguments" and "pipelines" are evaluations of
|
||||
data, defined in detail below.
|
||||
|
||||
*/
|
||||
// {{/* a comment */}}
|
||||
// A comment; discarded. May contain newlines.
|
||||
// Comments do not nest and must start and end at the
|
||||
// delimiters, as shown here.
|
||||
/*
|
||||
|
||||
{{pipeline}}
|
||||
The default textual representation of the value of the pipeline
|
||||
is copied to the output.
|
||||
|
||||
{{if pipeline}} T1 {{end}}
|
||||
If the value of the pipeline is empty, no output is generated;
|
||||
otherwise, T1 is executed. The empty values are false, 0, any
|
||||
nil pointer or interface value, and any array, slice, map, or
|
||||
string of length zero.
|
||||
Dot is unaffected.
|
||||
|
||||
{{if pipeline}} T1 {{else}} T0 {{end}}
|
||||
If the value of the pipeline is empty, T0 is executed;
|
||||
otherwise, T1 is executed. Dot is unaffected.
|
||||
|
||||
{{if pipeline}} T1 {{else if pipeline}} T0 {{end}}
|
||||
To simplify the appearance of if-else chains, the else action
|
||||
of an if may include another if directly; the effect is exactly
|
||||
the same as writing
|
||||
{{if pipeline}} T1 {{else}}{{if pipeline}} T0 {{end}}{{end}}
|
||||
|
||||
{{range pipeline}} T1 {{end}}
|
||||
The value of the pipeline must be an array, slice, map, or channel.
|
||||
If the value of the pipeline has length zero, nothing is output;
|
||||
otherwise, dot is set to the successive elements of the array,
|
||||
slice, or map and T1 is executed. If the value is a map and the
|
||||
keys are of basic type with a defined order ("comparable"), the
|
||||
elements will be visited in sorted key order.
|
||||
|
||||
{{range pipeline}} T1 {{else}} T0 {{end}}
|
||||
The value of the pipeline must be an array, slice, map, or channel.
|
||||
If the value of the pipeline has length zero, dot is unaffected and
|
||||
T0 is executed; otherwise, dot is set to the successive elements
|
||||
of the array, slice, or map and T1 is executed.
|
||||
|
||||
{{template "name"}}
|
||||
The template with the specified name is executed with nil data.
|
||||
|
||||
{{template "name" pipeline}}
|
||||
The template with the specified name is executed with dot set
|
||||
to the value of the pipeline.
|
||||
|
||||
{{with pipeline}} T1 {{end}}
|
||||
If the value of the pipeline is empty, no output is generated;
|
||||
otherwise, dot is set to the value of the pipeline and T1 is
|
||||
executed.
|
||||
|
||||
{{with pipeline}} T1 {{else}} T0 {{end}}
|
||||
If the value of the pipeline is empty, dot is unaffected and T0
|
||||
is executed; otherwise, dot is set to the value of the pipeline
|
||||
and T1 is executed.
|
||||
|
||||
Arguments
|
||||
|
||||
An argument is a simple value, denoted by one of the following.
|
||||
|
||||
- A boolean, string, character, integer, floating-point, imaginary
|
||||
or complex constant in Go syntax. These behave like Go's untyped
|
||||
constants, although raw strings may not span newlines.
|
||||
- The keyword nil, representing an untyped Go nil.
|
||||
- The character '.' (period):
|
||||
.
|
||||
The result is the value of dot.
|
||||
- A variable name, which is a (possibly empty) alphanumeric string
|
||||
preceded by a dollar sign, such as
|
||||
$piOver2
|
||||
or
|
||||
$
|
||||
The result is the value of the variable.
|
||||
Variables are described below.
|
||||
- The name of a field of the data, which must be a struct, preceded
|
||||
by a period, such as
|
||||
.Field
|
||||
The result is the value of the field. Field invocations may be
|
||||
chained:
|
||||
.Field1.Field2
|
||||
Fields can also be evaluated on variables, including chaining:
|
||||
$x.Field1.Field2
|
||||
- The name of a key of the data, which must be a map, preceded
|
||||
by a period, such as
|
||||
.Key
|
||||
The result is the map element value indexed by the key.
|
||||
Key invocations may be chained and combined with fields to any
|
||||
depth:
|
||||
.Field1.Key1.Field2.Key2
|
||||
Although the key must be an alphanumeric identifier, unlike with
|
||||
field names they do not need to start with an upper case letter.
|
||||
Keys can also be evaluated on variables, including chaining:
|
||||
$x.key1.key2
|
||||
- The name of a niladic method of the data, preceded by a period,
|
||||
such as
|
||||
.Method
|
||||
The result is the value of invoking the method with dot as the
|
||||
receiver, dot.Method(). Such a method must have one return value (of
|
||||
any type) or two return values, the second of which is an error.
|
||||
If it has two and the returned error is non-nil, execution terminates
|
||||
and an error is returned to the caller as the value of Execute.
|
||||
Method invocations may be chained and combined with fields and keys
|
||||
to any depth:
|
||||
.Field1.Key1.Method1.Field2.Key2.Method2
|
||||
Methods can also be evaluated on variables, including chaining:
|
||||
$x.Method1.Field
|
||||
- The name of a niladic function, such as
|
||||
fun
|
||||
The result is the value of invoking the function, fun(). The return
|
||||
types and values behave as in methods. Functions and function
|
||||
names are described below.
|
||||
- A parenthesized instance of one the above, for grouping. The result
|
||||
may be accessed by a field or map key invocation.
|
||||
print (.F1 arg1) (.F2 arg2)
|
||||
(.StructValuedMethod "arg").Field
|
||||
|
||||
Arguments may evaluate to any type; if they are pointers the implementation
|
||||
automatically indirects to the base type when required.
|
||||
If an evaluation yields a function value, such as a function-valued
|
||||
field of a struct, the function is not invoked automatically, but it
|
||||
can be used as a truth value for an if action and the like. To invoke
|
||||
it, use the call function, defined below.
|
||||
|
||||
A pipeline is a possibly chained sequence of "commands". A command is a simple
|
||||
value (argument) or a function or method call, possibly with multiple arguments:
|
||||
|
||||
Argument
|
||||
The result is the value of evaluating the argument.
|
||||
.Method [Argument...]
|
||||
The method can be alone or the last element of a chain but,
|
||||
unlike methods in the middle of a chain, it can take arguments.
|
||||
The result is the value of calling the method with the
|
||||
arguments:
|
||||
dot.Method(Argument1, etc.)
|
||||
functionName [Argument...]
|
||||
The result is the value of calling the function associated
|
||||
with the name:
|
||||
function(Argument1, etc.)
|
||||
Functions and function names are described below.
|
||||
|
||||
Pipelines
|
||||
|
||||
A pipeline may be "chained" by separating a sequence of commands with pipeline
|
||||
characters '|'. In a chained pipeline, the result of the each command is
|
||||
passed as the last argument of the following command. The output of the final
|
||||
command in the pipeline is the value of the pipeline.
|
||||
|
||||
The output of a command will be either one value or two values, the second of
|
||||
which has type error. If that second value is present and evaluates to
|
||||
non-nil, execution terminates and the error is returned to the caller of
|
||||
Execute.
|
||||
|
||||
Variables
|
||||
|
||||
A pipeline inside an action may initialize a variable to capture the result.
|
||||
The initialization has syntax
|
||||
|
||||
$variable := pipeline
|
||||
|
||||
where $variable is the name of the variable. An action that declares a
|
||||
variable produces no output.
|
||||
|
||||
If a "range" action initializes a variable, the variable is set to the
|
||||
successive elements of the iteration. Also, a "range" may declare two
|
||||
variables, separated by a comma:
|
||||
|
||||
range $index, $element := pipeline
|
||||
|
||||
in which case $index and $element are set to the successive values of the
|
||||
array/slice index or map key and element, respectively. Note that if there is
|
||||
only one variable, it is assigned the element; this is opposite to the
|
||||
convention in Go range clauses.
|
||||
|
||||
A variable's scope extends to the "end" action of the control structure ("if",
|
||||
"with", or "range") in which it is declared, or to the end of the template if
|
||||
there is no such control structure. A template invocation does not inherit
|
||||
variables from the point of its invocation.
|
||||
|
||||
When execution begins, $ is set to the data argument passed to Execute, that is,
|
||||
to the starting value of dot.
|
||||
|
||||
Examples
|
||||
|
||||
Here are some example one-line templates demonstrating pipelines and variables.
|
||||
All produce the quoted word "output":
|
||||
|
||||
{{"\"output\""}}
|
||||
A string constant.
|
||||
{{`"output"`}}
|
||||
A raw string constant.
|
||||
{{printf "%q" "output"}}
|
||||
A function call.
|
||||
{{"output" | printf "%q"}}
|
||||
A function call whose final argument comes from the previous
|
||||
command.
|
||||
{{printf "%q" (print "out" "put")}}
|
||||
A parenthesized argument.
|
||||
{{"put" | printf "%s%s" "out" | printf "%q"}}
|
||||
A more elaborate call.
|
||||
{{"output" | printf "%s" | printf "%q"}}
|
||||
A longer chain.
|
||||
{{with "output"}}{{printf "%q" .}}{{end}}
|
||||
A with action using dot.
|
||||
{{with $x := "output" | printf "%q"}}{{$x}}{{end}}
|
||||
A with action that creates and uses a variable.
|
||||
{{with $x := "output"}}{{printf "%q" $x}}{{end}}
|
||||
A with action that uses the variable in another action.
|
||||
{{with $x := "output"}}{{$x | printf "%q"}}{{end}}
|
||||
The same, but pipelined.
|
||||
|
||||
Functions
|
||||
|
||||
During execution functions are found in two function maps: first in the
|
||||
template, then in the global function map. By default, no functions are defined
|
||||
in the template but the Funcs method can be used to add them.
|
||||
|
||||
Predefined global functions are named as follows.
|
||||
|
||||
and
|
||||
Returns the boolean AND of its arguments by returning the
|
||||
first empty argument or the last argument, that is,
|
||||
"and x y" behaves as "if x then y else x". All the
|
||||
arguments are evaluated.
|
||||
call
|
||||
Returns the result of calling the first argument, which
|
||||
must be a function, with the remaining arguments as parameters.
|
||||
Thus "call .X.Y 1 2" is, in Go notation, dot.X.Y(1, 2) where
|
||||
Y is a func-valued field, map entry, or the like.
|
||||
The first argument must be the result of an evaluation
|
||||
that yields a value of function type (as distinct from
|
||||
a predefined function such as print). The function must
|
||||
return either one or two result values, the second of which
|
||||
is of type error. If the arguments don't match the function
|
||||
or the returned error value is non-nil, execution stops.
|
||||
html
|
||||
Returns the escaped HTML equivalent of the textual
|
||||
representation of its arguments.
|
||||
index
|
||||
Returns the result of indexing its first argument by the
|
||||
following arguments. Thus "index x 1 2 3" is, in Go syntax,
|
||||
x[1][2][3]. Each indexed item must be a map, slice, or array.
|
||||
js
|
||||
Returns the escaped JavaScript equivalent of the textual
|
||||
representation of its arguments.
|
||||
len
|
||||
Returns the integer length of its argument.
|
||||
not
|
||||
Returns the boolean negation of its single argument.
|
||||
or
|
||||
Returns the boolean OR of its arguments by returning the
|
||||
first non-empty argument or the last argument, that is,
|
||||
"or x y" behaves as "if x then x else y". All the
|
||||
arguments are evaluated.
|
||||
print
|
||||
An alias for fmt.Sprint
|
||||
printf
|
||||
An alias for fmt.Sprintf
|
||||
println
|
||||
An alias for fmt.Sprintln
|
||||
urlquery
|
||||
Returns the escaped value of the textual representation of
|
||||
its arguments in a form suitable for embedding in a URL query.
|
||||
|
||||
The boolean functions take any zero value to be false and a non-zero
|
||||
value to be true.
|
||||
|
||||
There is also a set of binary comparison operators defined as
|
||||
functions:
|
||||
|
||||
eq
|
||||
Returns the boolean truth of arg1 == arg2
|
||||
ne
|
||||
Returns the boolean truth of arg1 != arg2
|
||||
lt
|
||||
Returns the boolean truth of arg1 < arg2
|
||||
le
|
||||
Returns the boolean truth of arg1 <= arg2
|
||||
gt
|
||||
Returns the boolean truth of arg1 > arg2
|
||||
ge
|
||||
Returns the boolean truth of arg1 >= arg2
|
||||
|
||||
For simpler multi-way equality tests, eq (only) accepts two or more
|
||||
arguments and compares the second and subsequent to the first,
|
||||
returning in effect
|
||||
|
||||
arg1==arg2 || arg1==arg3 || arg1==arg4 ...
|
||||
|
||||
(Unlike with || in Go, however, eq is a function call and all the
|
||||
arguments will be evaluated.)
|
||||
|
||||
The comparison functions work on basic types only (or named basic
|
||||
types, such as "type Celsius float32"). They implement the Go rules
|
||||
for comparison of values, except that size and exact type are
|
||||
ignored, so any integer value, signed or unsigned, may be compared
|
||||
with any other integer value. (The arithmetic value is compared,
|
||||
not the bit pattern, so all negative integers are less than all
|
||||
unsigned integers.) However, as usual, one may not compare an int
|
||||
with a float32 and so on.
|
||||
|
||||
Associated templates
|
||||
|
||||
Each template is named by a string specified when it is created. Also, each
|
||||
template is associated with zero or more other templates that it may invoke by
|
||||
name; such associations are transitive and form a name space of templates.
|
||||
|
||||
A template may use a template invocation to instantiate another associated
|
||||
template; see the explanation of the "template" action above. The name must be
|
||||
that of a template associated with the template that contains the invocation.
|
||||
|
||||
Nested template definitions
|
||||
|
||||
When parsing a template, another template may be defined and associated with the
|
||||
template being parsed. Template definitions must appear at the top level of the
|
||||
template, much like global variables in a Go program.
|
||||
|
||||
The syntax of such definitions is to surround each template declaration with a
|
||||
"define" and "end" action.
|
||||
|
||||
The define action names the template being created by providing a string
|
||||
constant. Here is a simple example:
|
||||
|
||||
`{{define "T1"}}ONE{{end}}
|
||||
{{define "T2"}}TWO{{end}}
|
||||
{{define "T3"}}{{template "T1"}} {{template "T2"}}{{end}}
|
||||
{{template "T3"}}`
|
||||
|
||||
This defines two templates, T1 and T2, and a third T3 that invokes the other two
|
||||
when it is executed. Finally it invokes T3. If executed this template will
|
||||
produce the text
|
||||
|
||||
ONE TWO
|
||||
|
||||
By construction, a template may reside in only one association. If it's
|
||||
necessary to have a template addressable from multiple associations, the
|
||||
template definition must be parsed multiple times to create distinct *Template
|
||||
values, or must be copied with the Clone or AddParseTree method.
|
||||
|
||||
Parse may be called multiple times to assemble the various associated templates;
|
||||
see the ParseFiles and ParseGlob functions and methods for simple ways to parse
|
||||
related templates stored in files.
|
||||
|
||||
A template may be executed directly or through ExecuteTemplate, which executes
|
||||
an associated template identified by name. To invoke our example above, we
|
||||
might write,
|
||||
|
||||
err := tmpl.Execute(os.Stdout, "no data needed")
|
||||
if err != nil {
|
||||
log.Fatalf("execution failed: %s", err)
|
||||
}
|
||||
|
||||
or to invoke a particular template explicitly by name,
|
||||
|
||||
err := tmpl.ExecuteTemplate(os.Stdout, "T2", "no data needed")
|
||||
if err != nil {
|
||||
log.Fatalf("execution failed: %s", err)
|
||||
}
|
||||
|
||||
*/
|
||||
package template
|
||||
845 vendor/github.com/alecthomas/template/exec.go generated vendored Normal file
@@ -0,0 +1,845 @@
|
||||
// Copyright 2011 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package template
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"io"
|
||||
"reflect"
|
||||
"runtime"
|
||||
"sort"
|
||||
"strings"
|
||||
|
||||
"github.com/alecthomas/template/parse"
|
||||
)
|
||||
|
||||
// state represents the state of an execution. It's not part of the
|
||||
// template so that multiple executions of the same template
|
||||
// can execute in parallel.
|
||||
type state struct {
|
||||
tmpl *Template
|
||||
wr io.Writer
|
||||
node parse.Node // current node, for errors
|
||||
vars []variable // push-down stack of variable values.
|
||||
}
|
||||
|
||||
// variable holds the dynamic value of a variable such as $, $x etc.
|
||||
type variable struct {
|
||||
name string
|
||||
value reflect.Value
|
||||
}
|
||||
|
||||
// push pushes a new variable on the stack.
|
||||
func (s *state) push(name string, value reflect.Value) {
|
||||
s.vars = append(s.vars, variable{name, value})
|
||||
}
|
||||
|
||||
// mark returns the length of the variable stack.
|
||||
func (s *state) mark() int {
|
||||
return len(s.vars)
|
||||
}
|
||||
|
||||
// pop pops the variable stack up to the mark.
|
||||
func (s *state) pop(mark int) {
|
||||
s.vars = s.vars[0:mark]
|
||||
}
|
||||
|
||||
// setVar overwrites the top-nth variable on the stack. Used by range iterations.
|
||||
func (s *state) setVar(n int, value reflect.Value) {
|
||||
s.vars[len(s.vars)-n].value = value
|
||||
}
|
||||
|
||||
// varValue returns the value of the named variable.
|
||||
func (s *state) varValue(name string) reflect.Value {
|
||||
for i := s.mark() - 1; i >= 0; i-- {
|
||||
if s.vars[i].name == name {
|
||||
return s.vars[i].value
|
||||
}
|
||||
}
|
||||
s.errorf("undefined variable: %s", name)
|
||||
return zero
|
||||
}
|
||||
|
||||
var zero reflect.Value
|
||||
|
||||
// at marks the state to be on node n, for error reporting.
|
||||
func (s *state) at(node parse.Node) {
|
||||
s.node = node
|
||||
}
|
||||
|
||||
// doublePercent returns the string with %'s replaced by %%, if necessary,
|
||||
// so it can be used safely inside a Printf format string.
|
||||
func doublePercent(str string) string {
|
||||
if strings.Contains(str, "%") {
|
||||
str = strings.Replace(str, "%", "%%", -1)
|
||||
}
|
||||
return str
|
||||
}
|
||||
|
||||
// errorf formats the error and terminates processing.
|
||||
func (s *state) errorf(format string, args ...interface{}) {
|
||||
name := doublePercent(s.tmpl.Name())
|
||||
if s.node == nil {
|
||||
format = fmt.Sprintf("template: %s: %s", name, format)
|
||||
} else {
|
||||
location, context := s.tmpl.ErrorContext(s.node)
|
||||
format = fmt.Sprintf("template: %s: executing %q at <%s>: %s", location, name, doublePercent(context), format)
|
||||
}
|
||||
panic(fmt.Errorf(format, args...))
|
||||
}
|
||||
|
||||
// errRecover is the handler that turns panics into returns from the top
|
||||
// level of Parse.
|
||||
func errRecover(errp *error) {
|
||||
e := recover()
|
||||
if e != nil {
|
||||
switch err := e.(type) {
|
||||
case runtime.Error:
|
||||
panic(e)
|
||||
case error:
|
||||
*errp = err
|
||||
default:
|
||||
panic(e)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// ExecuteTemplate applies the template associated with t that has the given name
|
||||
// to the specified data object and writes the output to wr.
|
||||
// If an error occurs executing the template or writing its output,
|
||||
// execution stops, but partial results may already have been written to
|
||||
// the output writer.
|
||||
// A template may be executed safely in parallel.
|
||||
func (t *Template) ExecuteTemplate(wr io.Writer, name string, data interface{}) error {
|
||||
tmpl := t.tmpl[name]
|
||||
if tmpl == nil {
|
||||
return fmt.Errorf("template: no template %q associated with template %q", name, t.name)
|
||||
}
|
||||
return tmpl.Execute(wr, data)
|
||||
}
|
||||
|
||||
// Execute applies a parsed template to the specified data object,
|
||||
// and writes the output to wr.
|
||||
// If an error occurs executing the template or writing its output,
|
||||
// execution stops, but partial results may already have been written to
|
||||
// the output writer.
|
||||
// A template may be executed safely in parallel.
|
||||
func (t *Template) Execute(wr io.Writer, data interface{}) (err error) {
|
||||
defer errRecover(&err)
|
||||
value := reflect.ValueOf(data)
|
||||
state := &state{
|
||||
tmpl: t,
|
||||
wr: wr,
|
||||
vars: []variable{{"$", value}},
|
||||
}
|
||||
t.init()
|
||||
if t.Tree == nil || t.Root == nil {
|
||||
var b bytes.Buffer
|
||||
for name, tmpl := range t.tmpl {
|
||||
if tmpl.Tree == nil || tmpl.Root == nil {
|
||||
continue
|
||||
}
|
||||
if b.Len() > 0 {
|
||||
b.WriteString(", ")
|
||||
}
|
||||
fmt.Fprintf(&b, "%q", name)
|
||||
}
|
||||
var s string
|
||||
if b.Len() > 0 {
|
||||
s = "; defined templates are: " + b.String()
|
||||
}
|
||||
state.errorf("%q is an incomplete or empty template%s", t.Name(), s)
|
||||
}
|
||||
state.walk(value, t.Root)
|
||||
return
|
||||
}
|
||||
|
||||
// Walk functions step through the major pieces of the template structure,
|
||||
// generating output as they go.
|
||||
func (s *state) walk(dot reflect.Value, node parse.Node) {
|
||||
s.at(node)
|
||||
switch node := node.(type) {
|
||||
case *parse.ActionNode:
|
||||
// Do not pop variables so they persist until next end.
|
||||
// Also, if the action declares variables, don't print the result.
|
||||
val := s.evalPipeline(dot, node.Pipe)
|
||||
if len(node.Pipe.Decl) == 0 {
|
||||
s.printValue(node, val)
|
||||
}
|
||||
case *parse.IfNode:
|
||||
s.walkIfOrWith(parse.NodeIf, dot, node.Pipe, node.List, node.ElseList)
|
||||
case *parse.ListNode:
|
||||
for _, node := range node.Nodes {
|
||||
s.walk(dot, node)
|
||||
}
|
||||
case *parse.RangeNode:
|
||||
s.walkRange(dot, node)
|
||||
case *parse.TemplateNode:
|
||||
s.walkTemplate(dot, node)
|
||||
case *parse.TextNode:
|
||||
if _, err := s.wr.Write(node.Text); err != nil {
|
||||
s.errorf("%s", err)
|
||||
}
|
||||
case *parse.WithNode:
|
||||
s.walkIfOrWith(parse.NodeWith, dot, node.Pipe, node.List, node.ElseList)
|
||||
default:
|
||||
s.errorf("unknown node: %s", node)
|
||||
}
|
||||
}
|
||||
|
||||
// walkIfOrWith walks an 'if' or 'with' node. The two control structures
|
||||
// are identical in behavior except that 'with' sets dot.
|
||||
func (s *state) walkIfOrWith(typ parse.NodeType, dot reflect.Value, pipe *parse.PipeNode, list, elseList *parse.ListNode) {
|
||||
defer s.pop(s.mark())
|
||||
val := s.evalPipeline(dot, pipe)
|
||||
truth, ok := isTrue(val)
|
||||
if !ok {
|
||||
s.errorf("if/with can't use %v", val)
|
||||
}
|
||||
if truth {
|
||||
if typ == parse.NodeWith {
|
||||
s.walk(val, list)
|
||||
} else {
|
||||
s.walk(dot, list)
|
||||
}
|
||||
} else if elseList != nil {
|
||||
s.walk(dot, elseList)
|
||||
}
|
||||
}
|
||||
|
||||
// isTrue reports whether the value is 'true', in the sense of not the zero of its type,
|
||||
// and whether the value has a meaningful truth value.
|
||||
func isTrue(val reflect.Value) (truth, ok bool) {
|
||||
if !val.IsValid() {
|
||||
// Something like var x interface{}, never set. It's a form of nil.
|
||||
return false, true
|
||||
}
|
||||
switch val.Kind() {
|
||||
case reflect.Array, reflect.Map, reflect.Slice, reflect.String:
|
||||
truth = val.Len() > 0
|
||||
case reflect.Bool:
|
||||
truth = val.Bool()
|
||||
case reflect.Complex64, reflect.Complex128:
|
||||
truth = val.Complex() != 0
|
||||
case reflect.Chan, reflect.Func, reflect.Ptr, reflect.Interface:
|
||||
truth = !val.IsNil()
|
||||
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
|
||||
truth = val.Int() != 0
|
||||
case reflect.Float32, reflect.Float64:
|
||||
truth = val.Float() != 0
|
||||
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
|
||||
truth = val.Uint() != 0
|
||||
case reflect.Struct:
|
||||
truth = true // Struct values are always true.
|
||||
default:
|
||||
return
|
||||
}
|
||||
return truth, true
|
||||
}
|
||||
|
||||
func (s *state) walkRange(dot reflect.Value, r *parse.RangeNode) {
|
||||
s.at(r)
|
||||
defer s.pop(s.mark())
|
||||
val, _ := indirect(s.evalPipeline(dot, r.Pipe))
|
||||
// mark top of stack before any variables in the body are pushed.
|
||||
mark := s.mark()
|
||||
oneIteration := func(index, elem reflect.Value) {
|
||||
// Set top var (lexically the second if there are two) to the element.
|
||||
if len(r.Pipe.Decl) > 0 {
|
||||
s.setVar(1, elem)
|
||||
}
|
||||
// Set next var (lexically the first if there are two) to the index.
|
||||
if len(r.Pipe.Decl) > 1 {
|
||||
s.setVar(2, index)
|
||||
}
|
||||
s.walk(elem, r.List)
|
||||
s.pop(mark)
|
||||
}
|
||||
switch val.Kind() {
|
||||
case reflect.Array, reflect.Slice:
|
||||
if val.Len() == 0 {
|
||||
break
|
||||
}
|
||||
for i := 0; i < val.Len(); i++ {
|
||||
oneIteration(reflect.ValueOf(i), val.Index(i))
|
||||
}
|
||||
return
|
||||
case reflect.Map:
|
||||
if val.Len() == 0 {
|
||||
break
|
||||
}
|
||||
for _, key := range sortKeys(val.MapKeys()) {
|
||||
oneIteration(key, val.MapIndex(key))
|
||||
}
|
||||
return
|
||||
case reflect.Chan:
|
||||
if val.IsNil() {
|
||||
break
|
||||
}
|
||||
i := 0
|
||||
for ; ; i++ {
|
||||
elem, ok := val.Recv()
|
||||
if !ok {
|
||||
break
|
||||
}
|
||||
oneIteration(reflect.ValueOf(i), elem)
|
||||
}
|
||||
if i == 0 {
|
||||
break
|
||||
}
|
||||
return
|
||||
case reflect.Invalid:
|
||||
break // An invalid value is likely a nil map, etc. and acts like an empty map.
|
||||
default:
|
||||
s.errorf("range can't iterate over %v", val)
|
||||
}
|
||||
if r.ElseList != nil {
|
||||
s.walk(dot, r.ElseList)
|
||||
}
|
||||
}
|
||||
|
||||
func (s *state) walkTemplate(dot reflect.Value, t *parse.TemplateNode) {
|
||||
s.at(t)
|
||||
tmpl := s.tmpl.tmpl[t.Name]
|
||||
if tmpl == nil {
|
||||
s.errorf("template %q not defined", t.Name)
|
||||
}
|
||||
// Variables declared by the pipeline persist.
|
||||
dot = s.evalPipeline(dot, t.Pipe)
|
||||
newState := *s
|
||||
newState.tmpl = tmpl
|
||||
// No dynamic scoping: template invocations inherit no variables.
|
||||
newState.vars = []variable{{"$", dot}}
|
||||
newState.walk(dot, tmpl.Root)
|
||||
}
|
||||
|
||||
// Eval functions evaluate pipelines, commands, and their elements and extract
|
||||
// values from the data structure by examining fields, calling methods, and so on.
|
||||
// The printing of those values happens only through walk functions.
|
||||
|
||||
// evalPipeline returns the value acquired by evaluating a pipeline. If the
|
||||
// pipeline has a variable declaration, the variable will be pushed on the
|
||||
// stack. Callers should therefore pop the stack after they are finished
|
||||
// executing commands depending on the pipeline value.
|
||||
func (s *state) evalPipeline(dot reflect.Value, pipe *parse.PipeNode) (value reflect.Value) {
|
||||
if pipe == nil {
|
||||
return
|
||||
}
|
||||
s.at(pipe)
|
||||
for _, cmd := range pipe.Cmds {
|
||||
value = s.evalCommand(dot, cmd, value) // previous value is this one's final arg.
|
||||
// If the object has type interface{}, dig down one level to the thing inside.
|
||||
if value.Kind() == reflect.Interface && value.Type().NumMethod() == 0 {
|
||||
value = reflect.ValueOf(value.Interface()) // lovely!
|
||||
}
|
||||
}
|
||||
for _, variable := range pipe.Decl {
|
||||
s.push(variable.Ident[0], value)
|
||||
}
|
||||
return value
|
||||
}
|
||||
|
||||
func (s *state) notAFunction(args []parse.Node, final reflect.Value) {
|
||||
if len(args) > 1 || final.IsValid() {
|
||||
s.errorf("can't give argument to non-function %s", args[0])
|
||||
}
|
||||
}
|
||||
|
||||
func (s *state) evalCommand(dot reflect.Value, cmd *parse.CommandNode, final reflect.Value) reflect.Value {
|
||||
firstWord := cmd.Args[0]
|
||||
switch n := firstWord.(type) {
|
||||
case *parse.FieldNode:
|
||||
return s.evalFieldNode(dot, n, cmd.Args, final)
|
||||
case *parse.ChainNode:
|
||||
return s.evalChainNode(dot, n, cmd.Args, final)
|
||||
case *parse.IdentifierNode:
|
||||
// Must be a function.
|
||||
return s.evalFunction(dot, n, cmd, cmd.Args, final)
|
||||
case *parse.PipeNode:
|
||||
// Parenthesized pipeline. The arguments are all inside the pipeline; final is ignored.
|
||||
return s.evalPipeline(dot, n)
|
||||
case *parse.VariableNode:
|
||||
return s.evalVariableNode(dot, n, cmd.Args, final)
|
||||
}
|
||||
s.at(firstWord)
|
||||
s.notAFunction(cmd.Args, final)
|
||||
switch word := firstWord.(type) {
|
||||
case *parse.BoolNode:
|
||||
return reflect.ValueOf(word.True)
|
||||
case *parse.DotNode:
|
||||
return dot
|
||||
case *parse.NilNode:
|
||||
s.errorf("nil is not a command")
|
||||
case *parse.NumberNode:
|
||||
return s.idealConstant(word)
|
||||
case *parse.StringNode:
|
||||
return reflect.ValueOf(word.Text)
|
||||
}
|
||||
s.errorf("can't evaluate command %q", firstWord)
|
||||
panic("not reached")
|
||||
}
|
||||
|
||||
// idealConstant is called to return the value of a number in a context where
|
||||
// we don't know the type. In that case, the syntax of the number tells us
|
||||
// its type, and we use Go rules to resolve. Note there is no such thing as
|
||||
// a uint ideal constant in this situation - the value must be of int type.
|
||||
func (s *state) idealConstant(constant *parse.NumberNode) reflect.Value {
|
||||
// These are ideal constants but we don't know the type
|
||||
// and we have no context. (If it was a method argument,
|
||||
// we'd know what we need.) The syntax guides us to some extent.
|
||||
s.at(constant)
|
||||
switch {
|
||||
case constant.IsComplex:
|
||||
return reflect.ValueOf(constant.Complex128) // incontrovertible.
|
||||
case constant.IsFloat && !isHexConstant(constant.Text) && strings.IndexAny(constant.Text, ".eE") >= 0:
|
||||
return reflect.ValueOf(constant.Float64)
|
||||
case constant.IsInt:
|
||||
n := int(constant.Int64)
|
||||
if int64(n) != constant.Int64 {
|
||||
s.errorf("%s overflows int", constant.Text)
|
||||
}
|
||||
return reflect.ValueOf(n)
|
||||
case constant.IsUint:
|
||||
s.errorf("%s overflows int", constant.Text)
|
||||
}
|
||||
return zero
|
||||
}
|
||||
|
||||
func isHexConstant(s string) bool {
|
||||
return len(s) > 2 && s[0] == '0' && (s[1] == 'x' || s[1] == 'X')
|
||||
}
|
||||
|
||||
func (s *state) evalFieldNode(dot reflect.Value, field *parse.FieldNode, args []parse.Node, final reflect.Value) reflect.Value {
|
||||
s.at(field)
|
||||
return s.evalFieldChain(dot, dot, field, field.Ident, args, final)
|
||||
}
|
||||
|
||||
func (s *state) evalChainNode(dot reflect.Value, chain *parse.ChainNode, args []parse.Node, final reflect.Value) reflect.Value {
|
||||
s.at(chain)
|
||||
// (pipe).Field1.Field2 has pipe as .Node, fields as .Field. Eval the pipeline, then the fields.
|
||||
pipe := s.evalArg(dot, nil, chain.Node)
|
||||
if len(chain.Field) == 0 {
|
||||
s.errorf("internal error: no fields in evalChainNode")
|
||||
}
|
||||
return s.evalFieldChain(dot, pipe, chain, chain.Field, args, final)
|
||||
}
|
||||
|
||||
func (s *state) evalVariableNode(dot reflect.Value, variable *parse.VariableNode, args []parse.Node, final reflect.Value) reflect.Value {
|
||||
// $x.Field has $x as the first ident, Field as the second. Eval the var, then the fields.
|
||||
s.at(variable)
|
||||
value := s.varValue(variable.Ident[0])
|
||||
if len(variable.Ident) == 1 {
|
||||
s.notAFunction(args, final)
|
||||
return value
|
||||
}
|
||||
return s.evalFieldChain(dot, value, variable, variable.Ident[1:], args, final)
|
||||
}
|
||||
|
||||
// evalFieldChain evaluates .X.Y.Z possibly followed by arguments.
|
||||
// dot is the environment in which to evaluate arguments, while
|
||||
// receiver is the value being walked along the chain.
|
||||
func (s *state) evalFieldChain(dot, receiver reflect.Value, node parse.Node, ident []string, args []parse.Node, final reflect.Value) reflect.Value {
|
||||
n := len(ident)
|
||||
for i := 0; i < n-1; i++ {
|
||||
receiver = s.evalField(dot, ident[i], node, nil, zero, receiver)
|
||||
}
|
||||
// Now if it's a method, it gets the arguments.
|
||||
return s.evalField(dot, ident[n-1], node, args, final, receiver)
|
||||
}
|
||||
|
||||
func (s *state) evalFunction(dot reflect.Value, node *parse.IdentifierNode, cmd parse.Node, args []parse.Node, final reflect.Value) reflect.Value {
|
||||
s.at(node)
|
||||
name := node.Ident
|
||||
function, ok := findFunction(name, s.tmpl)
|
||||
if !ok {
|
||||
s.errorf("%q is not a defined function", name)
|
||||
}
|
||||
return s.evalCall(dot, function, cmd, name, args, final)
|
||||
}
|
||||
|
||||
// evalField evaluates an expression like (.Field) or (.Field arg1 arg2).
|
||||
// The 'final' argument represents the return value from the preceding
|
||||
// value of the pipeline, if any.
|
||||
func (s *state) evalField(dot reflect.Value, fieldName string, node parse.Node, args []parse.Node, final, receiver reflect.Value) reflect.Value {
|
||||
if !receiver.IsValid() {
|
||||
return zero
|
||||
}
|
||||
typ := receiver.Type()
|
||||
receiver, _ = indirect(receiver)
|
||||
// Unless it's an interface, need to get to a value of type *T to guarantee
|
||||
// we see all methods of T and *T.
|
||||
ptr := receiver
|
||||
if ptr.Kind() != reflect.Interface && ptr.CanAddr() {
|
||||
ptr = ptr.Addr()
|
||||
}
|
||||
if method := ptr.MethodByName(fieldName); method.IsValid() {
|
||||
return s.evalCall(dot, method, node, fieldName, args, final)
|
||||
}
|
||||
hasArgs := len(args) > 1 || final.IsValid()
|
||||
// It's not a method; must be a field of a struct or an element of a map. The receiver must not be nil.
|
||||
receiver, isNil := indirect(receiver)
|
||||
if isNil {
|
||||
s.errorf("nil pointer evaluating %s.%s", typ, fieldName)
|
||||
}
|
||||
switch receiver.Kind() {
|
||||
case reflect.Struct:
|
||||
tField, ok := receiver.Type().FieldByName(fieldName)
|
||||
if ok {
|
||||
field := receiver.FieldByIndex(tField.Index)
|
||||
if tField.PkgPath != "" { // field is unexported
|
||||
s.errorf("%s is an unexported field of struct type %s", fieldName, typ)
|
||||
}
|
||||
// If it's a function, we must call it.
|
||||
if hasArgs {
|
||||
s.errorf("%s has arguments but cannot be invoked as function", fieldName)
|
||||
}
|
||||
return field
|
||||
}
|
||||
s.errorf("%s is not a field of struct type %s", fieldName, typ)
|
||||
case reflect.Map:
|
||||
// If it's a map, attempt to use the field name as a key.
|
||||
nameVal := reflect.ValueOf(fieldName)
|
||||
if nameVal.Type().AssignableTo(receiver.Type().Key()) {
|
||||
if hasArgs {
|
||||
s.errorf("%s is not a method but has arguments", fieldName)
|
||||
}
|
||||
return receiver.MapIndex(nameVal)
|
||||
}
|
||||
}
|
||||
s.errorf("can't evaluate field %s in type %s", fieldName, typ)
|
||||
panic("not reached")
|
||||
}
|
||||
|
||||
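For orientation, evalField's lookup order (method on T or *T, then struct field, then map key) is visible from a template. A small sketch; the type and field names are made up for illustration:

package main

import (
	"os"

	"github.com/alecthomas/template"
)

// User is a hypothetical type: Name is a struct field, Title is a method.
type User struct{ Name string }

func (u User) Title() string { return "Dr. " + u.Name }

func main() {
	// .Title resolves to the method, .Name to the field; .greeting is a map-key lookup.
	t := template.Must(template.New("x").Parse(`{{.U.Title}} / {{.U.Name}} / {{.M.greeting}}`))
	data := map[string]interface{}{
		"U": User{Name: "Ada"},
		"M": map[string]string{"greeting": "hello"},
	}
	t.Execute(os.Stdout, data) // prints: Dr. Ada / Ada / hello
}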
var (
|
||||
errorType = reflect.TypeOf((*error)(nil)).Elem()
|
||||
fmtStringerType = reflect.TypeOf((*fmt.Stringer)(nil)).Elem()
|
||||
)
|
||||
|
||||
// evalCall executes a function or method call. If it's a method, fun already has the receiver bound, so
|
||||
// it looks just like a function call. The arg list, if non-nil, includes (in the manner of the shell) arg[0]
|
||||
// as the function itself.
|
||||
func (s *state) evalCall(dot, fun reflect.Value, node parse.Node, name string, args []parse.Node, final reflect.Value) reflect.Value {
|
||||
if args != nil {
|
||||
args = args[1:] // Zeroth arg is function name/node; not passed to function.
|
||||
}
|
||||
typ := fun.Type()
|
||||
numIn := len(args)
|
||||
if final.IsValid() {
|
||||
numIn++
|
||||
}
|
||||
numFixed := len(args)
|
||||
if typ.IsVariadic() {
|
||||
numFixed = typ.NumIn() - 1 // last arg is the variadic one.
|
||||
if numIn < numFixed {
|
||||
s.errorf("wrong number of args for %s: want at least %d got %d", name, typ.NumIn()-1, len(args))
|
||||
}
|
||||
} else if numIn < typ.NumIn()-1 || !typ.IsVariadic() && numIn != typ.NumIn() {
|
||||
s.errorf("wrong number of args for %s: want %d got %d", name, typ.NumIn(), len(args))
|
||||
}
|
||||
if !goodFunc(typ) {
|
||||
// TODO: This could still be a confusing error; maybe goodFunc should provide info.
|
||||
s.errorf("can't call method/function %q with %d results", name, typ.NumOut())
|
||||
}
|
||||
// Build the arg list.
|
||||
argv := make([]reflect.Value, numIn)
|
||||
// Args must be evaluated. Fixed args first.
|
||||
i := 0
|
||||
for ; i < numFixed && i < len(args); i++ {
|
||||
argv[i] = s.evalArg(dot, typ.In(i), args[i])
|
||||
}
|
||||
// Now the ... args.
|
||||
if typ.IsVariadic() {
|
||||
argType := typ.In(typ.NumIn() - 1).Elem() // Argument is a slice.
|
||||
for ; i < len(args); i++ {
|
||||
argv[i] = s.evalArg(dot, argType, args[i])
|
||||
}
|
||||
}
|
||||
// Add final value if necessary.
|
||||
if final.IsValid() {
|
||||
t := typ.In(typ.NumIn() - 1)
|
||||
if typ.IsVariadic() {
|
||||
t = t.Elem()
|
||||
}
|
||||
argv[i] = s.validateType(final, t)
|
||||
}
|
||||
result := fun.Call(argv)
|
||||
// If we have an error that is not nil, stop execution and return that error to the caller.
|
||||
if len(result) == 2 && !result[1].IsNil() {
|
||||
s.at(node)
|
||||
s.errorf("error calling %s: %s", name, result[1].Interface().(error))
|
||||
}
|
||||
return result[0]
|
||||
}
|
||||
|
||||
// canBeNil reports whether an untyped nil can be assigned to the type. See reflect.Zero.
|
||||
func canBeNil(typ reflect.Type) bool {
|
||||
switch typ.Kind() {
|
||||
case reflect.Chan, reflect.Func, reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice:
|
||||
return true
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// validateType guarantees that the value is valid and assignable to the type.
|
||||
func (s *state) validateType(value reflect.Value, typ reflect.Type) reflect.Value {
|
||||
if !value.IsValid() {
|
||||
if typ == nil || canBeNil(typ) {
|
||||
// An untyped nil interface{}. Accept as a proper nil value.
|
||||
return reflect.Zero(typ)
|
||||
}
|
||||
s.errorf("invalid value; expected %s", typ)
|
||||
}
|
||||
if typ != nil && !value.Type().AssignableTo(typ) {
|
||||
if value.Kind() == reflect.Interface && !value.IsNil() {
|
||||
value = value.Elem()
|
||||
if value.Type().AssignableTo(typ) {
|
||||
return value
|
||||
}
|
||||
// fallthrough
|
||||
}
|
||||
// Does one dereference or indirection work? We could do more, as we
|
||||
// do with method receivers, but that gets messy and method receivers
|
||||
// are much more constrained, so it makes more sense there than here.
|
||||
// Besides, one is almost always all you need.
|
||||
switch {
|
||||
case value.Kind() == reflect.Ptr && value.Type().Elem().AssignableTo(typ):
|
||||
value = value.Elem()
|
||||
if !value.IsValid() {
|
||||
s.errorf("dereference of nil pointer of type %s", typ)
|
||||
}
|
||||
case reflect.PtrTo(value.Type()).AssignableTo(typ) && value.CanAddr():
|
||||
value = value.Addr()
|
||||
default:
|
||||
s.errorf("wrong type for value; expected %s; got %s", typ, value.Type())
|
||||
}
|
||||
}
|
||||
return value
|
||||
}
|
||||
|
||||
func (s *state) evalArg(dot reflect.Value, typ reflect.Type, n parse.Node) reflect.Value {
|
||||
s.at(n)
|
||||
switch arg := n.(type) {
|
||||
case *parse.DotNode:
|
||||
return s.validateType(dot, typ)
|
||||
case *parse.NilNode:
|
||||
if canBeNil(typ) {
|
||||
return reflect.Zero(typ)
|
||||
}
|
||||
s.errorf("cannot assign nil to %s", typ)
|
||||
case *parse.FieldNode:
|
||||
return s.validateType(s.evalFieldNode(dot, arg, []parse.Node{n}, zero), typ)
|
||||
case *parse.VariableNode:
|
||||
return s.validateType(s.evalVariableNode(dot, arg, nil, zero), typ)
|
||||
case *parse.PipeNode:
|
||||
return s.validateType(s.evalPipeline(dot, arg), typ)
|
||||
case *parse.IdentifierNode:
|
||||
return s.evalFunction(dot, arg, arg, nil, zero)
|
||||
case *parse.ChainNode:
|
||||
return s.validateType(s.evalChainNode(dot, arg, nil, zero), typ)
|
||||
}
|
||||
switch typ.Kind() {
|
||||
case reflect.Bool:
|
||||
return s.evalBool(typ, n)
|
||||
case reflect.Complex64, reflect.Complex128:
|
||||
return s.evalComplex(typ, n)
|
||||
case reflect.Float32, reflect.Float64:
|
||||
return s.evalFloat(typ, n)
|
||||
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
|
||||
return s.evalInteger(typ, n)
|
||||
case reflect.Interface:
|
||||
if typ.NumMethod() == 0 {
|
||||
return s.evalEmptyInterface(dot, n)
|
||||
}
|
||||
case reflect.String:
|
||||
return s.evalString(typ, n)
|
||||
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
|
||||
return s.evalUnsignedInteger(typ, n)
|
||||
}
|
||||
s.errorf("can't handle %s for arg of type %s", n, typ)
|
||||
panic("not reached")
|
||||
}
|
||||
|
||||
func (s *state) evalBool(typ reflect.Type, n parse.Node) reflect.Value {
|
||||
s.at(n)
|
||||
if n, ok := n.(*parse.BoolNode); ok {
|
||||
value := reflect.New(typ).Elem()
|
||||
value.SetBool(n.True)
|
||||
return value
|
||||
}
|
||||
s.errorf("expected bool; found %s", n)
|
||||
panic("not reached")
|
||||
}
|
||||
|
||||
func (s *state) evalString(typ reflect.Type, n parse.Node) reflect.Value {
|
||||
s.at(n)
|
||||
if n, ok := n.(*parse.StringNode); ok {
|
||||
value := reflect.New(typ).Elem()
|
||||
value.SetString(n.Text)
|
||||
return value
|
||||
}
|
||||
s.errorf("expected string; found %s", n)
|
||||
panic("not reached")
|
||||
}
|
||||
|
||||
func (s *state) evalInteger(typ reflect.Type, n parse.Node) reflect.Value {
|
||||
s.at(n)
|
||||
if n, ok := n.(*parse.NumberNode); ok && n.IsInt {
|
||||
value := reflect.New(typ).Elem()
|
||||
value.SetInt(n.Int64)
|
||||
return value
|
||||
}
|
||||
s.errorf("expected integer; found %s", n)
|
||||
panic("not reached")
|
||||
}
|
||||
|
||||
func (s *state) evalUnsignedInteger(typ reflect.Type, n parse.Node) reflect.Value {
|
||||
s.at(n)
|
||||
if n, ok := n.(*parse.NumberNode); ok && n.IsUint {
|
||||
value := reflect.New(typ).Elem()
|
||||
value.SetUint(n.Uint64)
|
||||
return value
|
||||
}
|
||||
s.errorf("expected unsigned integer; found %s", n)
|
||||
panic("not reached")
|
||||
}
|
||||
|
||||
func (s *state) evalFloat(typ reflect.Type, n parse.Node) reflect.Value {
|
||||
s.at(n)
|
||||
if n, ok := n.(*parse.NumberNode); ok && n.IsFloat {
|
||||
value := reflect.New(typ).Elem()
|
||||
value.SetFloat(n.Float64)
|
||||
return value
|
||||
}
|
||||
s.errorf("expected float; found %s", n)
|
||||
panic("not reached")
|
||||
}
|
||||
|
||||
func (s *state) evalComplex(typ reflect.Type, n parse.Node) reflect.Value {
|
||||
if n, ok := n.(*parse.NumberNode); ok && n.IsComplex {
|
||||
value := reflect.New(typ).Elem()
|
||||
value.SetComplex(n.Complex128)
|
||||
return value
|
||||
}
|
||||
s.errorf("expected complex; found %s", n)
|
||||
panic("not reached")
|
||||
}
|
||||
|
||||
func (s *state) evalEmptyInterface(dot reflect.Value, n parse.Node) reflect.Value {
|
||||
s.at(n)
|
||||
switch n := n.(type) {
|
||||
case *parse.BoolNode:
|
||||
return reflect.ValueOf(n.True)
|
||||
case *parse.DotNode:
|
||||
return dot
|
||||
case *parse.FieldNode:
|
||||
return s.evalFieldNode(dot, n, nil, zero)
|
||||
case *parse.IdentifierNode:
|
||||
return s.evalFunction(dot, n, n, nil, zero)
|
||||
case *parse.NilNode:
|
||||
// NilNode is handled in evalArg, the only place that calls here.
|
||||
s.errorf("evalEmptyInterface: nil (can't happen)")
|
||||
case *parse.NumberNode:
|
||||
return s.idealConstant(n)
|
||||
case *parse.StringNode:
|
||||
return reflect.ValueOf(n.Text)
|
||||
case *parse.VariableNode:
|
||||
return s.evalVariableNode(dot, n, nil, zero)
|
||||
case *parse.PipeNode:
|
||||
return s.evalPipeline(dot, n)
|
||||
}
|
||||
s.errorf("can't handle assignment of %s to empty interface argument", n)
|
||||
panic("not reached")
|
||||
}
|
||||
|
||||
// indirect returns the item at the end of indirection, and a bool to indicate if it's nil.
|
||||
// We indirect through pointers and empty interfaces (only) because
|
||||
// non-empty interfaces have methods we might need.
|
||||
func indirect(v reflect.Value) (rv reflect.Value, isNil bool) {
|
||||
for ; v.Kind() == reflect.Ptr || v.Kind() == reflect.Interface; v = v.Elem() {
|
||||
if v.IsNil() {
|
||||
return v, true
|
||||
}
|
||||
if v.Kind() == reflect.Interface && v.NumMethod() > 0 {
|
||||
break
|
||||
}
|
||||
}
|
||||
return v, false
|
||||
}
|
||||
|
||||
// printValue writes the textual representation of the value to the output of
|
||||
// the template.
|
||||
func (s *state) printValue(n parse.Node, v reflect.Value) {
|
||||
s.at(n)
|
||||
iface, ok := printableValue(v)
|
||||
if !ok {
|
||||
s.errorf("can't print %s of type %s", n, v.Type())
|
||||
}
|
||||
fmt.Fprint(s.wr, iface)
|
||||
}
|
||||
|
||||
// printableValue returns the, possibly indirected, interface value inside v that
|
||||
// is best for a call to formatted printer.
|
||||
func printableValue(v reflect.Value) (interface{}, bool) {
|
||||
if v.Kind() == reflect.Ptr {
|
||||
v, _ = indirect(v) // fmt.Fprint handles nil.
|
||||
}
|
||||
if !v.IsValid() {
|
||||
return "<no value>", true
|
||||
}
|
||||
|
||||
if !v.Type().Implements(errorType) && !v.Type().Implements(fmtStringerType) {
|
||||
if v.CanAddr() && (reflect.PtrTo(v.Type()).Implements(errorType) || reflect.PtrTo(v.Type()).Implements(fmtStringerType)) {
|
||||
v = v.Addr()
|
||||
} else {
|
||||
switch v.Kind() {
|
||||
case reflect.Chan, reflect.Func:
|
||||
return nil, false
|
||||
}
|
||||
}
|
||||
}
|
||||
return v.Interface(), true
|
||||
}
|
||||
|
||||
// Types to help sort the keys in a map for reproducible output.
|
||||
|
||||
type rvs []reflect.Value
|
||||
|
||||
func (x rvs) Len() int { return len(x) }
|
||||
func (x rvs) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
|
||||
|
||||
type rvInts struct{ rvs }
|
||||
|
||||
func (x rvInts) Less(i, j int) bool { return x.rvs[i].Int() < x.rvs[j].Int() }
|
||||
|
||||
type rvUints struct{ rvs }
|
||||
|
||||
func (x rvUints) Less(i, j int) bool { return x.rvs[i].Uint() < x.rvs[j].Uint() }
|
||||
|
||||
type rvFloats struct{ rvs }
|
||||
|
||||
func (x rvFloats) Less(i, j int) bool { return x.rvs[i].Float() < x.rvs[j].Float() }
|
||||
|
||||
type rvStrings struct{ rvs }
|
||||
|
||||
func (x rvStrings) Less(i, j int) bool { return x.rvs[i].String() < x.rvs[j].String() }
|
||||
|
||||
// sortKeys sorts (if it can) the slice of reflect.Values, which is a slice of map keys.
|
||||
func sortKeys(v []reflect.Value) []reflect.Value {
|
||||
if len(v) <= 1 {
|
||||
return v
|
||||
}
|
||||
switch v[0].Kind() {
|
||||
case reflect.Float32, reflect.Float64:
|
||||
sort.Sort(rvFloats{v})
|
||||
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
|
||||
sort.Sort(rvInts{v})
|
||||
case reflect.String:
|
||||
sort.Sort(rvStrings{v})
|
||||
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
|
||||
sort.Sort(rvUints{v})
|
||||
}
|
||||
return v
|
||||
}
|
||||
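sortKeys is what gives {{range}} over a map a stable key order, even though Go's own map iteration order is randomized. A minimal sketch of that effect, assuming the package is imported as template:

package main

import (
	"os"

	"github.com/alecthomas/template"
)

func main() {
	// range walks the map keys through sortKeys, so the output is deterministic.
	t := template.Must(template.New("m").Parse(`{{range $k, $v := .}}{{$k}}={{$v}} {{end}}`))
	t.Execute(os.Stdout, map[string]int{"b": 2, "a": 1, "c": 3})
	// prints: a=1 b=2 c=3
}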
vendor/github.com/alecthomas/template/funcs.go (generated, vendored, new file, 598 lines)
@@ -0,0 +1,598 @@
|
||||
// Copyright 2011 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package template
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"net/url"
|
||||
"reflect"
|
||||
"strings"
|
||||
"unicode"
|
||||
"unicode/utf8"
|
||||
)
|
||||
|
||||
// FuncMap is the type of the map defining the mapping from names to functions.
|
||||
// Each function must have either a single return value, or two return values of
|
||||
// which the second has type error. In that case, if the second (error)
|
||||
// return value evaluates to non-nil during execution, execution terminates and
|
||||
// Execute returns that error.
|
||||
type FuncMap map[string]interface{}
|
||||
|
||||
var builtins = FuncMap{
|
||||
"and": and,
|
||||
"call": call,
|
||||
"html": HTMLEscaper,
|
||||
"index": index,
|
||||
"js": JSEscaper,
|
||||
"len": length,
|
||||
"not": not,
|
||||
"or": or,
|
||||
"print": fmt.Sprint,
|
||||
"printf": fmt.Sprintf,
|
||||
"println": fmt.Sprintln,
|
||||
"urlquery": URLQueryEscaper,
|
||||
|
||||
// Comparisons
|
||||
"eq": eq, // ==
|
||||
"ge": ge, // >=
|
||||
"gt": gt, // >
|
||||
"le": le, // <=
|
||||
"lt": lt, // <
|
||||
"ne": ne, // !=
|
||||
}
|
||||
|
||||
var builtinFuncs = createValueFuncs(builtins)
|
||||
|
||||
// createValueFuncs turns a FuncMap into a map[string]reflect.Value
|
||||
func createValueFuncs(funcMap FuncMap) map[string]reflect.Value {
|
||||
m := make(map[string]reflect.Value)
|
||||
addValueFuncs(m, funcMap)
|
||||
return m
|
||||
}
|
||||
|
||||
// addValueFuncs adds to values the functions in funcs, converting them to reflect.Values.
|
||||
func addValueFuncs(out map[string]reflect.Value, in FuncMap) {
|
||||
for name, fn := range in {
|
||||
v := reflect.ValueOf(fn)
|
||||
if v.Kind() != reflect.Func {
|
||||
panic("value for " + name + " not a function")
|
||||
}
|
||||
if !goodFunc(v.Type()) {
|
||||
panic(fmt.Errorf("can't install method/function %q with %d results", name, v.Type().NumOut()))
|
||||
}
|
||||
out[name] = v
|
||||
}
|
||||
}
|
||||
|
||||
// addFuncs adds to values the functions in funcs. It does no checking of the input -
|
||||
// call addValueFuncs first.
|
||||
func addFuncs(out, in FuncMap) {
|
||||
for name, fn := range in {
|
||||
out[name] = fn
|
||||
}
|
||||
}
|
||||
|
||||
// goodFunc checks that the function or method has the right result signature.
|
||||
func goodFunc(typ reflect.Type) bool {
|
||||
// We allow functions with 1 result or 2 results where the second is an error.
|
||||
switch {
|
||||
case typ.NumOut() == 1:
|
||||
return true
|
||||
case typ.NumOut() == 2 && typ.Out(1) == errorType:
|
||||
return true
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
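goodFunc enforces the FuncMap contract described above: one result, or two results with the second an error. A sketch of registrations it would accept and reject (the function names here are made up for illustration):

package main

import (
	"strconv"
	"strings"

	"github.com/alecthomas/template"
)

func main() {
	t := template.New("funcs")

	// Accepted: one result.
	t = t.Funcs(template.FuncMap{"upper": strings.ToUpper})
	// Accepted: two results, the second an error.
	t = t.Funcs(template.FuncMap{"atoi": strconv.Atoi})

	// Rejected: two results but the second is not an error, so
	// addValueFuncs (via goodFunc) would panic at registration time.
	// t.Funcs(template.FuncMap{"divmod": func(a, b int) (int, int) { return a / b, a % b }})
	_ = t
}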
// findFunction looks for a function, first in the template's function map and then in the global map of builtins.
|
||||
func findFunction(name string, tmpl *Template) (reflect.Value, bool) {
|
||||
if tmpl != nil && tmpl.common != nil {
|
||||
if fn := tmpl.execFuncs[name]; fn.IsValid() {
|
||||
return fn, true
|
||||
}
|
||||
}
|
||||
if fn := builtinFuncs[name]; fn.IsValid() {
|
||||
return fn, true
|
||||
}
|
||||
return reflect.Value{}, false
|
||||
}
|
||||
|
||||
// Indexing.
|
||||
|
||||
// index returns the result of indexing its first argument by the following
|
||||
// arguments. Thus "index x 1 2 3" is, in Go syntax, x[1][2][3]. Each
|
||||
// indexed item must be a map, slice, or array.
|
||||
func index(item interface{}, indices ...interface{}) (interface{}, error) {
|
||||
v := reflect.ValueOf(item)
|
||||
for _, i := range indices {
|
||||
index := reflect.ValueOf(i)
|
||||
var isNil bool
|
||||
if v, isNil = indirect(v); isNil {
|
||||
return nil, fmt.Errorf("index of nil pointer")
|
||||
}
|
||||
switch v.Kind() {
|
||||
case reflect.Array, reflect.Slice, reflect.String:
|
||||
var x int64
|
||||
switch index.Kind() {
|
||||
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
|
||||
x = index.Int()
|
||||
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
|
||||
x = int64(index.Uint())
|
||||
default:
|
||||
return nil, fmt.Errorf("cannot index slice/array with type %s", index.Type())
|
||||
}
|
||||
if x < 0 || x >= int64(v.Len()) {
|
||||
return nil, fmt.Errorf("index out of range: %d", x)
|
||||
}
|
||||
v = v.Index(int(x))
|
||||
case reflect.Map:
|
||||
if !index.IsValid() {
|
||||
index = reflect.Zero(v.Type().Key())
|
||||
}
|
||||
if !index.Type().AssignableTo(v.Type().Key()) {
|
||||
return nil, fmt.Errorf("%s is not index type for %s", index.Type(), v.Type())
|
||||
}
|
||||
if x := v.MapIndex(index); x.IsValid() {
|
||||
v = x
|
||||
} else {
|
||||
v = reflect.Zero(v.Type().Elem())
|
||||
}
|
||||
default:
|
||||
return nil, fmt.Errorf("can't index item of type %s", v.Type())
|
||||
}
|
||||
}
|
||||
return v.Interface(), nil
|
||||
}
|
||||
|
||||
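A usage sketch of the index builtin defined above; "index x 1 2" is x[1][2], one subscript at a time (maps by key, slices and arrays by position):

package main

import (
	"os"

	"github.com/alecthomas/template"
)

func main() {
	t := template.Must(template.New("idx").Parse(`{{index . "scores" 1}}`))
	data := map[string]interface{}{"scores": []int{10, 20, 30}}
	t.Execute(os.Stdout, data) // prints: 20
}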
// Length
|
||||
|
||||
// length returns the length of the item, with an error if it has no defined length.
|
||||
func length(item interface{}) (int, error) {
|
||||
v, isNil := indirect(reflect.ValueOf(item))
|
||||
if isNil {
|
||||
return 0, fmt.Errorf("len of nil pointer")
|
||||
}
|
||||
switch v.Kind() {
|
||||
case reflect.Array, reflect.Chan, reflect.Map, reflect.Slice, reflect.String:
|
||||
return v.Len(), nil
|
||||
}
|
||||
return 0, fmt.Errorf("len of type %s", v.Type())
|
||||
}
|
||||
|
||||
// Function invocation
|
||||
|
||||
// call returns the result of evaluating the first argument as a function.
|
||||
// The function must return 1 result, or 2 results, the second of which is an error.
|
||||
func call(fn interface{}, args ...interface{}) (interface{}, error) {
|
||||
v := reflect.ValueOf(fn)
|
||||
typ := v.Type()
|
||||
if typ.Kind() != reflect.Func {
|
||||
return nil, fmt.Errorf("non-function of type %s", typ)
|
||||
}
|
||||
if !goodFunc(typ) {
|
||||
return nil, fmt.Errorf("function called with %d args; should be 1 or 2", typ.NumOut())
|
||||
}
|
||||
numIn := typ.NumIn()
|
||||
var dddType reflect.Type
|
||||
if typ.IsVariadic() {
|
||||
if len(args) < numIn-1 {
|
||||
return nil, fmt.Errorf("wrong number of args: got %d want at least %d", len(args), numIn-1)
|
||||
}
|
||||
dddType = typ.In(numIn - 1).Elem()
|
||||
} else {
|
||||
if len(args) != numIn {
|
||||
return nil, fmt.Errorf("wrong number of args: got %d want %d", len(args), numIn)
|
||||
}
|
||||
}
|
||||
argv := make([]reflect.Value, len(args))
|
||||
for i, arg := range args {
|
||||
value := reflect.ValueOf(arg)
|
||||
// Compute the expected type. Clumsy because of variadics.
|
||||
var argType reflect.Type
|
||||
if !typ.IsVariadic() || i < numIn-1 {
|
||||
argType = typ.In(i)
|
||||
} else {
|
||||
argType = dddType
|
||||
}
|
||||
if !value.IsValid() && canBeNil(argType) {
|
||||
value = reflect.Zero(argType)
|
||||
}
|
||||
if !value.Type().AssignableTo(argType) {
|
||||
return nil, fmt.Errorf("arg %d has type %s; should be %s", i, value.Type(), argType)
|
||||
}
|
||||
argv[i] = value
|
||||
}
|
||||
result := v.Call(argv)
|
||||
if len(result) == 2 && !result[1].IsNil() {
|
||||
return result[0].Interface(), result[1].Interface().(error)
|
||||
}
|
||||
return result[0].Interface(), nil
|
||||
}
|
||||
|
||||
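A usage sketch of the call builtin, which invokes a function-valued argument with the remaining arguments:

package main

import (
	"os"

	"github.com/alecthomas/template"
)

func main() {
	// "call .Add 2 3" calls the function stored in .Add with 2 and 3.
	t := template.Must(template.New("call").Parse(`{{call .Add 2 3}}`))
	data := map[string]interface{}{
		"Add": func(a, b int) int { return a + b },
	}
	t.Execute(os.Stdout, data) // prints: 5
}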
// Boolean logic.
|
||||
|
||||
func truth(a interface{}) bool {
|
||||
t, _ := isTrue(reflect.ValueOf(a))
|
||||
return t
|
||||
}
|
||||
|
||||
// and computes the Boolean AND of its arguments, returning
|
||||
// the first false argument it encounters, or the last argument.
|
||||
func and(arg0 interface{}, args ...interface{}) interface{} {
|
||||
if !truth(arg0) {
|
||||
return arg0
|
||||
}
|
||||
for i := range args {
|
||||
arg0 = args[i]
|
||||
if !truth(arg0) {
|
||||
break
|
||||
}
|
||||
}
|
||||
return arg0
|
||||
}
|
||||
|
||||
// or computes the Boolean OR of its arguments, returning
|
||||
// the first true argument it encounters, or the last argument.
|
||||
func or(arg0 interface{}, args ...interface{}) interface{} {
|
||||
if truth(arg0) {
|
||||
return arg0
|
||||
}
|
||||
for i := range args {
|
||||
arg0 = args[i]
|
||||
if truth(arg0) {
|
||||
break
|
||||
}
|
||||
}
|
||||
return arg0
|
||||
}
|
||||
|
||||
// not returns the Boolean negation of its argument.
|
||||
func not(arg interface{}) (truth bool) {
|
||||
truth, _ = isTrue(reflect.ValueOf(arg))
|
||||
return !truth
|
||||
}
|
||||
|
||||
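Note that and and or return one of their arguments rather than a bool, while not always returns a real bool. A small sketch of the semantics:

package main

import (
	"os"

	"github.com/alecthomas/template"
)

func main() {
	// and returns its first false argument (or the last);
	// or returns its first true argument (or the last).
	t := template.Must(template.New("bool").Parse(
		`{{and 0 1}} {{and 1 2}} {{or 0 2}} {{not ""}}`))
	t.Execute(os.Stdout, nil) // prints: 0 2 2 true
}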
// Comparison.
|
||||
|
||||
// TODO: Perhaps allow comparison between signed and unsigned integers.
|
||||
|
||||
var (
|
||||
errBadComparisonType = errors.New("invalid type for comparison")
|
||||
errBadComparison = errors.New("incompatible types for comparison")
|
||||
errNoComparison = errors.New("missing argument for comparison")
|
||||
)
|
||||
|
||||
type kind int
|
||||
|
||||
const (
|
||||
invalidKind kind = iota
|
||||
boolKind
|
||||
complexKind
|
||||
intKind
|
||||
floatKind
|
||||
integerKind
|
||||
stringKind
|
||||
uintKind
|
||||
)
|
||||
|
||||
func basicKind(v reflect.Value) (kind, error) {
|
||||
switch v.Kind() {
|
||||
case reflect.Bool:
|
||||
return boolKind, nil
|
||||
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
|
||||
return intKind, nil
|
||||
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
|
||||
return uintKind, nil
|
||||
case reflect.Float32, reflect.Float64:
|
||||
return floatKind, nil
|
||||
case reflect.Complex64, reflect.Complex128:
|
||||
return complexKind, nil
|
||||
case reflect.String:
|
||||
return stringKind, nil
|
||||
}
|
||||
return invalidKind, errBadComparisonType
|
||||
}
|
||||
|
||||
// eq evaluates the comparison a == b || a == c || ...
|
||||
func eq(arg1 interface{}, arg2 ...interface{}) (bool, error) {
|
||||
v1 := reflect.ValueOf(arg1)
|
||||
k1, err := basicKind(v1)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
if len(arg2) == 0 {
|
||||
return false, errNoComparison
|
||||
}
|
||||
for _, arg := range arg2 {
|
||||
v2 := reflect.ValueOf(arg)
|
||||
k2, err := basicKind(v2)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
truth := false
|
||||
if k1 != k2 {
|
||||
// Special case: Can compare integer values regardless of type's sign.
|
||||
switch {
|
||||
case k1 == intKind && k2 == uintKind:
|
||||
truth = v1.Int() >= 0 && uint64(v1.Int()) == v2.Uint()
|
||||
case k1 == uintKind && k2 == intKind:
|
||||
truth = v2.Int() >= 0 && v1.Uint() == uint64(v2.Int())
|
||||
default:
|
||||
return false, errBadComparison
|
||||
}
|
||||
} else {
|
||||
switch k1 {
|
||||
case boolKind:
|
||||
truth = v1.Bool() == v2.Bool()
|
||||
case complexKind:
|
||||
truth = v1.Complex() == v2.Complex()
|
||||
case floatKind:
|
||||
truth = v1.Float() == v2.Float()
|
||||
case intKind:
|
||||
truth = v1.Int() == v2.Int()
|
||||
case stringKind:
|
||||
truth = v1.String() == v2.String()
|
||||
case uintKind:
|
||||
truth = v1.Uint() == v2.Uint()
|
||||
default:
|
||||
panic("invalid kind")
|
||||
}
|
||||
}
|
||||
if truth {
|
||||
return true, nil
|
||||
}
|
||||
}
|
||||
return false, nil
|
||||
}
|
||||
|
||||
// ne evaluates the comparison a != b.
|
||||
func ne(arg1, arg2 interface{}) (bool, error) {
|
||||
// != is the inverse of ==.
|
||||
equal, err := eq(arg1, arg2)
|
||||
return !equal, err
|
||||
}
|
||||
|
||||
// lt evaluates the comparison a < b.
|
||||
func lt(arg1, arg2 interface{}) (bool, error) {
|
||||
v1 := reflect.ValueOf(arg1)
|
||||
k1, err := basicKind(v1)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
v2 := reflect.ValueOf(arg2)
|
||||
k2, err := basicKind(v2)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
truth := false
|
||||
if k1 != k2 {
|
||||
// Special case: Can compare integer values regardless of type's sign.
|
||||
switch {
|
||||
case k1 == intKind && k2 == uintKind:
|
||||
truth = v1.Int() < 0 || uint64(v1.Int()) < v2.Uint()
|
||||
case k1 == uintKind && k2 == intKind:
|
||||
truth = v2.Int() >= 0 && v1.Uint() < uint64(v2.Int())
|
||||
default:
|
||||
return false, errBadComparison
|
||||
}
|
||||
} else {
|
||||
switch k1 {
|
||||
case boolKind, complexKind:
|
||||
return false, errBadComparisonType
|
||||
case floatKind:
|
||||
truth = v1.Float() < v2.Float()
|
||||
case intKind:
|
||||
truth = v1.Int() < v2.Int()
|
||||
case stringKind:
|
||||
truth = v1.String() < v2.String()
|
||||
case uintKind:
|
||||
truth = v1.Uint() < v2.Uint()
|
||||
default:
|
||||
panic("invalid kind")
|
||||
}
|
||||
}
|
||||
return truth, nil
|
||||
}
|
||||
|
||||
// le evaluates the comparison a <= b.
|
||||
func le(arg1, arg2 interface{}) (bool, error) {
|
||||
// <= is < or ==.
|
||||
lessThan, err := lt(arg1, arg2)
|
||||
if lessThan || err != nil {
|
||||
return lessThan, err
|
||||
}
|
||||
return eq(arg1, arg2)
|
||||
}
|
||||
|
||||
// gt evaluates the comparison a > b.
|
||||
func gt(arg1, arg2 interface{}) (bool, error) {
|
||||
// > is the inverse of <=.
|
||||
lessOrEqual, err := le(arg1, arg2)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
return !lessOrEqual, nil
|
||||
}
|
||||
|
||||
// ge evaluates the comparison a >= b.
|
||||
func ge(arg1, arg2 interface{}) (bool, error) {
|
||||
// >= is the inverse of <.
|
||||
lessThan, err := lt(arg1, arg2)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
return !lessThan, nil
|
||||
}
|
||||
|
||||
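A sketch of the comparison builtins, including the signed/unsigned special case handled in eq and lt above; the .U field is a hypothetical uint value:

package main

import (
	"os"

	"github.com/alecthomas/template"
)

func main() {
	// eq/lt compare values of the same basic kind, with a special case
	// that lets signed and unsigned integers be compared by value.
	t := template.Must(template.New("cmp").Parse(
		`{{eq 1 .U}} {{lt "apple" "banana"}} {{ge 3 2}}`))
	t.Execute(os.Stdout, map[string]interface{}{"U": uint(1)})
	// prints: true true true
}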
// HTML escaping.
|
||||
|
||||
var (
|
||||
htmlQuot = []byte("&#34;") // shorter than "&quot;"
|
||||
htmlApos = []byte("&#39;") // shorter than "&apos;" and apos was not in HTML until HTML5
|
||||
htmlAmp = []byte("&amp;")
|
||||
htmlLt = []byte("&lt;")
|
||||
htmlGt = []byte("&gt;")
|
||||
)
|
||||
|
||||
// HTMLEscape writes to w the escaped HTML equivalent of the plain text data b.
|
||||
func HTMLEscape(w io.Writer, b []byte) {
|
||||
last := 0
|
||||
for i, c := range b {
|
||||
var html []byte
|
||||
switch c {
|
||||
case '"':
|
||||
html = htmlQuot
|
||||
case '\'':
|
||||
html = htmlApos
|
||||
case '&':
|
||||
html = htmlAmp
|
||||
case '<':
|
||||
html = htmlLt
|
||||
case '>':
|
||||
html = htmlGt
|
||||
default:
|
||||
continue
|
||||
}
|
||||
w.Write(b[last:i])
|
||||
w.Write(html)
|
||||
last = i + 1
|
||||
}
|
||||
w.Write(b[last:])
|
||||
}
|
||||
|
||||
// HTMLEscapeString returns the escaped HTML equivalent of the plain text data s.
|
||||
func HTMLEscapeString(s string) string {
|
||||
// Avoid allocation if we can.
|
||||
if strings.IndexAny(s, `'"&<>`) < 0 {
|
||||
return s
|
||||
}
|
||||
var b bytes.Buffer
|
||||
HTMLEscape(&b, []byte(s))
|
||||
return b.String()
|
||||
}
|
||||
|
||||
// HTMLEscaper returns the escaped HTML equivalent of the textual
|
||||
// representation of its arguments.
|
||||
func HTMLEscaper(args ...interface{}) string {
|
||||
return HTMLEscapeString(evalArgs(args))
|
||||
}
|
||||
|
||||
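A sketch of the HTML escaping helpers above, called directly and via the html function at the end of a pipeline:

package main

import (
	"fmt"
	"os"

	"github.com/alecthomas/template"
)

func main() {
	// Direct call: the five special characters become entities.
	fmt.Println(template.HTMLEscapeString(`<a href="x">Fish & Chips</a>`))
	// prints: &lt;a href=&#34;x&#34;&gt;Fish &amp; Chips&lt;/a&gt;

	// Via the html builtin in a pipeline.
	t := template.Must(template.New("h").Parse(`{{. | html}}`))
	t.Execute(os.Stdout, "1 < 2")
	// prints: 1 &lt; 2
}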
// JavaScript escaping.
|
||||
|
||||
var (
|
||||
jsLowUni = []byte(`\u00`)
|
||||
hex = []byte("0123456789ABCDEF")
|
||||
|
||||
jsBackslash = []byte(`\\`)
|
||||
jsApos = []byte(`\'`)
|
||||
jsQuot = []byte(`\"`)
|
||||
jsLt = []byte(`\x3C`)
|
||||
jsGt = []byte(`\x3E`)
|
||||
)
|
||||
|
||||
// JSEscape writes to w the escaped JavaScript equivalent of the plain text data b.
|
||||
func JSEscape(w io.Writer, b []byte) {
|
||||
last := 0
|
||||
for i := 0; i < len(b); i++ {
|
||||
c := b[i]
|
||||
|
||||
if !jsIsSpecial(rune(c)) {
|
||||
// fast path: nothing to do
|
||||
continue
|
||||
}
|
||||
w.Write(b[last:i])
|
||||
|
||||
if c < utf8.RuneSelf {
|
||||
// Quotes, slashes and angle brackets get quoted.
|
||||
// Control characters get written as \u00XX.
|
||||
switch c {
|
||||
case '\\':
|
||||
w.Write(jsBackslash)
|
||||
case '\'':
|
||||
w.Write(jsApos)
|
||||
case '"':
|
||||
w.Write(jsQuot)
|
||||
case '<':
|
||||
w.Write(jsLt)
|
||||
case '>':
|
||||
w.Write(jsGt)
|
||||
default:
|
||||
w.Write(jsLowUni)
|
||||
t, b := c>>4, c&0x0f
|
||||
w.Write(hex[t : t+1])
|
||||
w.Write(hex[b : b+1])
|
||||
}
|
||||
} else {
|
||||
// Unicode rune.
|
||||
r, size := utf8.DecodeRune(b[i:])
|
||||
if unicode.IsPrint(r) {
|
||||
w.Write(b[i : i+size])
|
||||
} else {
|
||||
fmt.Fprintf(w, "\\u%04X", r)
|
||||
}
|
||||
i += size - 1
|
||||
}
|
||||
last = i + 1
|
||||
}
|
||||
w.Write(b[last:])
|
||||
}
|
||||
|
||||
// JSEscapeString returns the escaped JavaScript equivalent of the plain text data s.
|
||||
func JSEscapeString(s string) string {
|
||||
// Avoid allocation if we can.
|
||||
if strings.IndexFunc(s, jsIsSpecial) < 0 {
|
||||
return s
|
||||
}
|
||||
var b bytes.Buffer
|
||||
JSEscape(&b, []byte(s))
|
||||
return b.String()
|
||||
}
|
||||
|
||||
func jsIsSpecial(r rune) bool {
|
||||
switch r {
|
||||
case '\\', '\'', '"', '<', '>':
|
||||
return true
|
||||
}
|
||||
return r < ' ' || utf8.RuneSelf <= r
|
||||
}
|
||||
|
||||
// JSEscaper returns the escaped JavaScript equivalent of the textual
|
||||
// representation of its arguments.
|
||||
func JSEscaper(args ...interface{}) string {
|
||||
return JSEscapeString(evalArgs(args))
|
||||
}
|
||||
|
||||
// URLQueryEscaper returns the escaped value of the textual representation of
|
||||
// its arguments in a form suitable for embedding in a URL query.
|
||||
func URLQueryEscaper(args ...interface{}) string {
|
||||
return url.QueryEscape(evalArgs(args))
|
||||
}
|
||||
|
||||
// evalArgs formats the list of arguments into a string. It is therefore equivalent to
|
||||
// fmt.Sprint(args...)
|
||||
// except that each argument is indirected (if a pointer), as required,
|
||||
// using the same rules as the default string evaluation during template
|
||||
// execution.
|
||||
func evalArgs(args []interface{}) string {
|
||||
ok := false
|
||||
var s string
|
||||
// Fast path for simple common case.
|
||||
if len(args) == 1 {
|
||||
s, ok = args[0].(string)
|
||||
}
|
||||
if !ok {
|
||||
for i, arg := range args {
|
||||
a, ok := printableValue(reflect.ValueOf(arg))
|
||||
if ok {
|
||||
args[i] = a
|
||||
} // else let fmt do its thing
|
||||
}
|
||||
s = fmt.Sprint(args...)
|
||||
}
|
||||
return s
|
||||
}
|
||||
vendor/github.com/alecthomas/template/helper.go (generated, vendored, new file, 108 lines)
@@ -0,0 +1,108 @@
|
||||
// Copyright 2011 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
// Helper functions to make constructing templates easier.
|
||||
|
||||
package template
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"path/filepath"
|
||||
)
|
||||
|
||||
// Functions and methods to parse templates.
|
||||
|
||||
// Must is a helper that wraps a call to a function returning (*Template, error)
|
||||
// and panics if the error is non-nil. It is intended for use in variable
|
||||
// initializations such as
|
||||
// var t = template.Must(template.New("name").Parse("text"))
|
||||
func Must(t *Template, err error) *Template {
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
return t
|
||||
}
|
||||
|
||||
// ParseFiles creates a new Template and parses the template definitions from
|
||||
// the named files. The returned template's name will have the (base) name and
|
||||
// (parsed) contents of the first file. There must be at least one file.
|
||||
// If an error occurs, parsing stops and the returned *Template is nil.
|
||||
func ParseFiles(filenames ...string) (*Template, error) {
|
||||
return parseFiles(nil, filenames...)
|
||||
}
|
||||
|
||||
// ParseFiles parses the named files and associates the resulting templates with
|
||||
// t. If an error occurs, parsing stops and the returned template is nil;
|
||||
// otherwise it is t. There must be at least one file.
|
||||
func (t *Template) ParseFiles(filenames ...string) (*Template, error) {
|
||||
return parseFiles(t, filenames...)
|
||||
}
|
||||
|
||||
// parseFiles is the helper for the method and function. If the argument
|
||||
// template is nil, it is created from the first file.
|
||||
func parseFiles(t *Template, filenames ...string) (*Template, error) {
|
||||
if len(filenames) == 0 {
|
||||
// Not really a problem, but be consistent.
|
||||
return nil, fmt.Errorf("template: no files named in call to ParseFiles")
|
||||
}
|
||||
for _, filename := range filenames {
|
||||
b, err := ioutil.ReadFile(filename)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
s := string(b)
|
||||
name := filepath.Base(filename)
|
||||
// First template becomes return value if not already defined,
|
||||
// and we use that one for subsequent New calls to associate
|
||||
// all the templates together. Also, if this file has the same name
|
||||
// as t, this file becomes the contents of t, so
|
||||
// t, err := New(name).Funcs(xxx).ParseFiles(name)
|
||||
// works. Otherwise we create a new template associated with t.
|
||||
var tmpl *Template
|
||||
if t == nil {
|
||||
t = New(name)
|
||||
}
|
||||
if name == t.Name() {
|
||||
tmpl = t
|
||||
} else {
|
||||
tmpl = t.New(name)
|
||||
}
|
||||
_, err = tmpl.Parse(s)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
return t, nil
|
||||
}
|
||||
|
||||
// ParseGlob creates a new Template and parses the template definitions from the
|
||||
// files identified by the pattern, which must match at least one file. The
|
||||
// returned template will have the (base) name and (parsed) contents of the
|
||||
// first file matched by the pattern. ParseGlob is equivalent to calling
|
||||
// ParseFiles with the list of files matched by the pattern.
|
||||
func ParseGlob(pattern string) (*Template, error) {
|
||||
return parseGlob(nil, pattern)
|
||||
}
|
||||
|
||||
// ParseGlob parses the template definitions in the files identified by the
|
||||
// pattern and associates the resulting templates with t. The pattern is
|
||||
// processed by filepath.Glob and must match at least one file. ParseGlob is
|
||||
// equivalent to calling t.ParseFiles with the list of files matched by the
|
||||
// pattern.
|
||||
func (t *Template) ParseGlob(pattern string) (*Template, error) {
|
||||
return parseGlob(t, pattern)
|
||||
}
|
||||
|
||||
// parseGlob is the implementation of the function and method ParseGlob.
|
||||
func parseGlob(t *Template, pattern string) (*Template, error) {
|
||||
filenames, err := filepath.Glob(pattern)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if len(filenames) == 0 {
|
||||
return nil, fmt.Errorf("template: pattern matches no files: %#q", pattern)
|
||||
}
|
||||
return parseFiles(t, filenames...)
|
||||
}
|
||||
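A usage sketch for the helpers above; the directory, file, and template names are hypothetical:

package main

import (
	"os"

	"github.com/alecthomas/template"
)

func main() {
	// Must panics on a parse error, which suits package-level initialization.
	// ParseGlob loads every matching file; each becomes a named template.
	tmpl := template.Must(template.ParseGlob("templates/*.tmpl"))

	// Execute one of the parsed files by its base name.
	if err := tmpl.ExecuteTemplate(os.Stdout, "index.tmpl", nil); err != nil {
		panic(err)
	}
}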
vendor/github.com/alecthomas/template/parse/lex.go (generated, vendored, new file, 556 lines)
@@ -0,0 +1,556 @@
|
||||
// Copyright 2011 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package parse
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"strings"
|
||||
"unicode"
|
||||
"unicode/utf8"
|
||||
)
|
||||
|
||||
// item represents a token or text string returned from the scanner.
|
||||
type item struct {
|
||||
typ itemType // The type of this item.
|
||||
pos Pos // The starting position, in bytes, of this item in the input string.
|
||||
val string // The value of this item.
|
||||
}
|
||||
|
||||
func (i item) String() string {
|
||||
switch {
|
||||
case i.typ == itemEOF:
|
||||
return "EOF"
|
||||
case i.typ == itemError:
|
||||
return i.val
|
||||
case i.typ > itemKeyword:
|
||||
return fmt.Sprintf("<%s>", i.val)
|
||||
case len(i.val) > 10:
|
||||
return fmt.Sprintf("%.10q...", i.val)
|
||||
}
|
||||
return fmt.Sprintf("%q", i.val)
|
||||
}
|
||||
|
||||
// itemType identifies the type of lex items.
|
||||
type itemType int
|
||||
|
||||
const (
|
||||
itemError itemType = iota // error occurred; value is text of error
|
||||
itemBool // boolean constant
|
||||
itemChar // printable ASCII character; grab bag for comma etc.
|
||||
itemCharConstant // character constant
|
||||
itemComplex // complex constant (1+2i); imaginary is just a number
|
||||
itemColonEquals // colon-equals (':=') introducing a declaration
|
||||
itemEOF
|
||||
itemField // alphanumeric identifier starting with '.'
|
||||
itemIdentifier // alphanumeric identifier not starting with '.'
|
||||
itemLeftDelim // left action delimiter
|
||||
itemLeftParen // '(' inside action
|
||||
itemNumber // simple number, including imaginary
|
||||
itemPipe // pipe symbol
|
||||
itemRawString // raw quoted string (includes quotes)
|
||||
itemRightDelim // right action delimiter
|
||||
itemElideNewline // elide newline after right delim
|
||||
itemRightParen // ')' inside action
|
||||
itemSpace // run of spaces separating arguments
|
||||
itemString // quoted string (includes quotes)
|
||||
itemText // plain text
|
||||
itemVariable // variable starting with '$', such as '$' or '$1' or '$hello'
|
||||
// Keywords appear after all the rest.
|
||||
itemKeyword // used only to delimit the keywords
|
||||
itemDot // the cursor, spelled '.'
|
||||
itemDefine // define keyword
|
||||
itemElse // else keyword
|
||||
itemEnd // end keyword
|
||||
itemIf // if keyword
|
||||
itemNil // the untyped nil constant, easiest to treat as a keyword
|
||||
itemRange // range keyword
|
||||
itemTemplate // template keyword
|
||||
itemWith // with keyword
|
||||
)
|
||||
|
||||
var key = map[string]itemType{
|
||||
".": itemDot,
|
||||
"define": itemDefine,
|
||||
"else": itemElse,
|
||||
"end": itemEnd,
|
||||
"if": itemIf,
|
||||
"range": itemRange,
|
||||
"nil": itemNil,
|
||||
"template": itemTemplate,
|
||||
"with": itemWith,
|
||||
}
|
||||
|
||||
const eof = -1
|
||||
|
||||
// stateFn represents the state of the scanner as a function that returns the next state.
|
||||
type stateFn func(*lexer) stateFn
|
||||
|
||||
// lexer holds the state of the scanner.
|
||||
type lexer struct {
|
||||
name string // the name of the input; used only for error reports
|
||||
input string // the string being scanned
|
||||
leftDelim string // start of action
|
||||
rightDelim string // end of action
|
||||
state stateFn // the next lexing function to enter
|
||||
pos Pos // current position in the input
|
||||
start Pos // start position of this item
|
||||
width Pos // width of last rune read from input
|
||||
lastPos Pos // position of most recent item returned by nextItem
|
||||
items chan item // channel of scanned items
|
||||
parenDepth int // nesting depth of ( ) exprs
|
||||
}
|
||||
|
||||
// next returns the next rune in the input.
|
||||
func (l *lexer) next() rune {
|
||||
if int(l.pos) >= len(l.input) {
|
||||
l.width = 0
|
||||
return eof
|
||||
}
|
||||
r, w := utf8.DecodeRuneInString(l.input[l.pos:])
|
||||
l.width = Pos(w)
|
||||
l.pos += l.width
|
||||
return r
|
||||
}
|
||||
|
||||
// peek returns but does not consume the next rune in the input.
|
||||
func (l *lexer) peek() rune {
|
||||
r := l.next()
|
||||
l.backup()
|
||||
return r
|
||||
}
|
||||
|
||||
// backup steps back one rune. Can only be called once per call of next.
|
||||
func (l *lexer) backup() {
|
||||
l.pos -= l.width
|
||||
}
|
||||
|
||||
// emit passes an item back to the client.
|
||||
func (l *lexer) emit(t itemType) {
|
||||
l.items <- item{t, l.start, l.input[l.start:l.pos]}
|
||||
l.start = l.pos
|
||||
}
|
||||
|
||||
// ignore skips over the pending input before this point.
|
||||
func (l *lexer) ignore() {
|
||||
l.start = l.pos
|
||||
}
|
||||
|
||||
// accept consumes the next rune if it's from the valid set.
|
||||
func (l *lexer) accept(valid string) bool {
|
||||
if strings.IndexRune(valid, l.next()) >= 0 {
|
||||
return true
|
||||
}
|
||||
l.backup()
|
||||
return false
|
||||
}
|
||||
|
||||
// acceptRun consumes a run of runes from the valid set.
|
||||
func (l *lexer) acceptRun(valid string) {
|
||||
for strings.IndexRune(valid, l.next()) >= 0 {
|
||||
}
|
||||
l.backup()
|
||||
}
|
||||
|
||||
// lineNumber reports which line we're on, based on the position of
|
||||
// the previous item returned by nextItem. Doing it this way
|
||||
// means we don't have to worry about peek double counting.
|
||||
func (l *lexer) lineNumber() int {
|
||||
return 1 + strings.Count(l.input[:l.lastPos], "\n")
|
||||
}
|
||||
|
||||
// errorf returns an error token and terminates the scan by passing
|
||||
// back a nil pointer that will be the next state, terminating l.nextItem.
|
||||
func (l *lexer) errorf(format string, args ...interface{}) stateFn {
|
||||
l.items <- item{itemError, l.start, fmt.Sprintf(format, args...)}
|
||||
return nil
|
||||
}
|
||||
|
||||
// nextItem returns the next item from the input.
|
||||
func (l *lexer) nextItem() item {
|
||||
item := <-l.items
|
||||
l.lastPos = item.pos
|
||||
return item
|
||||
}
|
||||
|
||||
// lex creates a new scanner for the input string.
|
||||
func lex(name, input, left, right string) *lexer {
|
||||
if left == "" {
|
||||
left = leftDelim
|
||||
}
|
||||
if right == "" {
|
||||
right = rightDelim
|
||||
}
|
||||
l := &lexer{
|
||||
name: name,
|
||||
input: input,
|
||||
leftDelim: left,
|
||||
rightDelim: right,
|
||||
items: make(chan item),
|
||||
}
|
||||
go l.run()
|
||||
return l
|
||||
}
|
||||
|
||||
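For orientation, the scanner can be driven from inside package parse roughly as below (an internal sketch, e.g. from a test file that also imports "fmt"; the lexer is not part of the public API):

// dumpItems is a hypothetical helper showing how nextItem is consumed.
func dumpItems(input string) {
	l := lex("example", input, "", "")
	for {
		it := l.nextItem()
		fmt.Printf("%d %q\n", it.typ, it.val)
		if it.typ == itemEOF || it.typ == itemError {
			break
		}
	}
}

// dumpItems("hello {{.Name}}") emits itemText("hello "), itemLeftDelim,
// itemField(".Name"), itemRightDelim, then itemEOF.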
// run runs the state machine for the lexer.
|
||||
func (l *lexer) run() {
|
||||
for l.state = lexText; l.state != nil; {
|
||||
l.state = l.state(l)
|
||||
}
|
||||
}
|
||||
|
||||
// state functions
|
||||
|
||||
const (
|
||||
leftDelim = "{{"
|
||||
rightDelim = "}}"
|
||||
leftComment = "/*"
|
||||
rightComment = "*/"
|
||||
)
|
||||
|
||||
// lexText scans until an opening action delimiter, "{{".
|
||||
func lexText(l *lexer) stateFn {
|
||||
for {
|
||||
if strings.HasPrefix(l.input[l.pos:], l.leftDelim) {
|
||||
if l.pos > l.start {
|
||||
l.emit(itemText)
|
||||
}
|
||||
return lexLeftDelim
|
||||
}
|
||||
if l.next() == eof {
|
||||
break
|
||||
}
|
||||
}
|
||||
// Correctly reached EOF.
|
||||
if l.pos > l.start {
|
||||
l.emit(itemText)
|
||||
}
|
||||
l.emit(itemEOF)
|
||||
return nil
|
||||
}
|
||||
|
||||
// lexLeftDelim scans the left delimiter, which is known to be present.
|
||||
func lexLeftDelim(l *lexer) stateFn {
|
||||
l.pos += Pos(len(l.leftDelim))
|
||||
if strings.HasPrefix(l.input[l.pos:], leftComment) {
|
||||
return lexComment
|
||||
}
|
||||
l.emit(itemLeftDelim)
|
||||
l.parenDepth = 0
|
||||
return lexInsideAction
|
||||
}
|
||||
|
||||
// lexComment scans a comment. The left comment marker is known to be present.
|
||||
func lexComment(l *lexer) stateFn {
|
||||
l.pos += Pos(len(leftComment))
|
||||
i := strings.Index(l.input[l.pos:], rightComment)
|
||||
if i < 0 {
|
||||
return l.errorf("unclosed comment")
|
||||
}
|
||||
l.pos += Pos(i + len(rightComment))
|
||||
if !strings.HasPrefix(l.input[l.pos:], l.rightDelim) {
|
||||
return l.errorf("comment ends before closing delimiter")
|
||||
|
||||
}
|
||||
l.pos += Pos(len(l.rightDelim))
|
||||
l.ignore()
|
||||
return lexText
|
||||
}
|
||||
|
||||
// lexRightDelim scans the right delimiter, which is known to be present.
|
||||
func lexRightDelim(l *lexer) stateFn {
|
||||
l.pos += Pos(len(l.rightDelim))
|
||||
l.emit(itemRightDelim)
|
||||
if l.peek() == '\\' {
|
||||
l.pos++
|
||||
l.emit(itemElideNewline)
|
||||
}
|
||||
return lexText
|
||||
}
|
||||
|
||||
// lexInsideAction scans the elements inside action delimiters.
|
||||
func lexInsideAction(l *lexer) stateFn {
|
||||
// Either number, quoted string, or identifier.
|
||||
// Spaces separate arguments; runs of spaces turn into itemSpace.
|
||||
// Pipe symbols separate and are emitted.
|
||||
if strings.HasPrefix(l.input[l.pos:], l.rightDelim+"\\") || strings.HasPrefix(l.input[l.pos:], l.rightDelim) {
|
||||
if l.parenDepth == 0 {
|
||||
return lexRightDelim
|
||||
}
|
||||
return l.errorf("unclosed left paren")
|
||||
}
|
||||
switch r := l.next(); {
|
||||
case r == eof || isEndOfLine(r):
|
||||
return l.errorf("unclosed action")
|
||||
case isSpace(r):
|
||||
return lexSpace
|
||||
case r == ':':
|
||||
if l.next() != '=' {
|
||||
return l.errorf("expected :=")
|
||||
}
|
||||
l.emit(itemColonEquals)
|
||||
case r == '|':
|
||||
l.emit(itemPipe)
|
||||
case r == '"':
|
||||
return lexQuote
|
||||
case r == '`':
|
||||
return lexRawQuote
|
||||
case r == '$':
|
||||
return lexVariable
|
||||
case r == '\'':
|
||||
return lexChar
|
||||
case r == '.':
|
||||
// special look-ahead for ".field" so we don't break l.backup().
|
||||
if l.pos < Pos(len(l.input)) {
|
||||
r := l.input[l.pos]
|
||||
if r < '0' || '9' < r {
|
||||
return lexField
|
||||
}
|
||||
}
|
||||
fallthrough // '.' can start a number.
|
||||
case r == '+' || r == '-' || ('0' <= r && r <= '9'):
|
||||
l.backup()
|
||||
return lexNumber
|
||||
case isAlphaNumeric(r):
|
||||
l.backup()
|
||||
return lexIdentifier
|
||||
case r == '(':
|
||||
l.emit(itemLeftParen)
|
||||
l.parenDepth++
|
||||
return lexInsideAction
|
||||
case r == ')':
|
||||
l.emit(itemRightParen)
|
||||
l.parenDepth--
|
||||
if l.parenDepth < 0 {
|
||||
return l.errorf("unexpected right paren %#U", r)
|
||||
}
|
||||
return lexInsideAction
|
||||
case r <= unicode.MaxASCII && unicode.IsPrint(r):
|
||||
l.emit(itemChar)
|
||||
return lexInsideAction
|
||||
default:
|
||||
return l.errorf("unrecognized character in action: %#U", r)
|
||||
}
|
||||
return lexInsideAction
|
||||
}
|
||||
|
||||
// lexSpace scans a run of space characters.
|
||||
// One space has already been seen.
|
||||
func lexSpace(l *lexer) stateFn {
|
||||
for isSpace(l.peek()) {
|
||||
l.next()
|
||||
}
|
||||
l.emit(itemSpace)
|
||||
return lexInsideAction
|
||||
}
|
||||
|
||||
// lexIdentifier scans an alphanumeric.
|
||||
func lexIdentifier(l *lexer) stateFn {
|
||||
Loop:
|
||||
for {
|
||||
switch r := l.next(); {
|
||||
case isAlphaNumeric(r):
|
||||
// absorb.
|
||||
default:
|
||||
l.backup()
|
||||
word := l.input[l.start:l.pos]
|
||||
if !l.atTerminator() {
|
||||
return l.errorf("bad character %#U", r)
|
||||
}
|
||||
switch {
|
||||
case key[word] > itemKeyword:
|
||||
l.emit(key[word])
|
||||
case word[0] == '.':
|
||||
l.emit(itemField)
|
||||
case word == "true", word == "false":
|
||||
l.emit(itemBool)
|
||||
default:
|
||||
l.emit(itemIdentifier)
|
||||
}
|
||||
break Loop
|
||||
}
|
||||
}
|
||||
return lexInsideAction
|
||||
}
|
||||
|
||||
// lexField scans a field: .Alphanumeric.
|
||||
// The . has been scanned.
|
||||
func lexField(l *lexer) stateFn {
|
||||
return lexFieldOrVariable(l, itemField)
|
||||
}
|
||||
|
||||
// lexVariable scans a Variable: $Alphanumeric.
|
||||
// The $ has been scanned.
|
||||
func lexVariable(l *lexer) stateFn {
|
||||
if l.atTerminator() { // Nothing interesting follows -> "$".
|
||||
l.emit(itemVariable)
|
||||
return lexInsideAction
|
||||
}
|
||||
return lexFieldOrVariable(l, itemVariable)
|
||||
}
|
||||
|
||||
// lexFieldOrVariable scans a field or variable: [.$]Alphanumeric.
|
||||
// The . or $ has been scanned.
|
||||
func lexFieldOrVariable(l *lexer, typ itemType) stateFn {
|
||||
if l.atTerminator() { // Nothing interesting follows -> "." or "$".
|
||||
if typ == itemVariable {
|
||||
l.emit(itemVariable)
|
||||
} else {
|
||||
l.emit(itemDot)
|
||||
}
|
||||
return lexInsideAction
|
||||
}
|
||||
var r rune
|
||||
for {
|
||||
r = l.next()
|
||||
if !isAlphaNumeric(r) {
|
||||
l.backup()
|
||||
break
|
||||
}
|
||||
}
|
||||
if !l.atTerminator() {
|
||||
return l.errorf("bad character %#U", r)
|
||||
}
|
||||
l.emit(typ)
|
||||
return lexInsideAction
|
||||
}
|
||||
|
||||
// atTerminator reports whether the input is at a valid termination character to
|
||||
// appear after an identifier. Breaks .X.Y into two pieces. Also catches cases
|
||||
// like "$x+2" not being acceptable without a space, in case we decide one
|
||||
// day to implement arithmetic.
|
||||
func (l *lexer) atTerminator() bool {
|
||||
r := l.peek()
|
||||
if isSpace(r) || isEndOfLine(r) {
|
||||
return true
|
||||
}
|
||||
switch r {
|
||||
case eof, '.', ',', '|', ':', ')', '(':
|
||||
return true
|
||||
}
|
||||
// Does r start the delimiter? This can be ambiguous (with delim=="//", $x/2 will
|
||||
// succeed but should fail) but only in extremely rare cases caused by willfully
|
||||
// bad choice of delimiter.
|
||||
if rd, _ := utf8.DecodeRuneInString(l.rightDelim); rd == r {
|
||||
return true
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// lexChar scans a character constant. The initial quote is already
|
||||
// scanned. Syntax checking is done by the parser.
|
||||
func lexChar(l *lexer) stateFn {
|
||||
Loop:
|
||||
for {
|
||||
switch l.next() {
|
||||
case '\\':
|
||||
if r := l.next(); r != eof && r != '\n' {
|
||||
break
|
||||
}
|
||||
fallthrough
|
||||
case eof, '\n':
|
||||
return l.errorf("unterminated character constant")
|
||||
case '\'':
|
||||
break Loop
|
||||
}
|
||||
}
|
||||
l.emit(itemCharConstant)
|
||||
return lexInsideAction
|
||||
}
|
||||
|
||||
// lexNumber scans a number: decimal, octal, hex, float, or imaginary. This
|
||||
// isn't a perfect number scanner - for instance it accepts "." and "0x0.2"
|
||||
// and "089" - but when it's wrong the input is invalid and the parser (via
|
||||
// strconv) will notice.
|
||||
func lexNumber(l *lexer) stateFn {
|
||||
if !l.scanNumber() {
|
||||
return l.errorf("bad number syntax: %q", l.input[l.start:l.pos])
|
||||
}
|
||||
if sign := l.peek(); sign == '+' || sign == '-' {
|
||||
// Complex: 1+2i. No spaces, must end in 'i'.
|
||||
if !l.scanNumber() || l.input[l.pos-1] != 'i' {
|
||||
return l.errorf("bad number syntax: %q", l.input[l.start:l.pos])
|
||||
}
|
||||
l.emit(itemComplex)
|
||||
} else {
|
||||
l.emit(itemNumber)
|
||||
}
|
||||
return lexInsideAction
|
||||
}
|
||||
|
||||
func (l *lexer) scanNumber() bool {
|
||||
// Optional leading sign.
|
||||
l.accept("+-")
|
||||
// Is it hex?
|
||||
digits := "0123456789"
|
||||
if l.accept("0") && l.accept("xX") {
|
||||
digits = "0123456789abcdefABCDEF"
|
||||
}
|
||||
l.acceptRun(digits)
|
||||
if l.accept(".") {
|
||||
l.acceptRun(digits)
|
||||
}
|
||||
if l.accept("eE") {
|
||||
l.accept("+-")
|
||||
l.acceptRun("0123456789")
|
||||
}
|
||||
// Is it imaginary?
|
||||
l.accept("i")
|
||||
// Next thing mustn't be alphanumeric.
|
||||
if isAlphaNumeric(l.peek()) {
|
||||
l.next()
|
||||
return false
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
// lexQuote scans a quoted string.
|
||||
func lexQuote(l *lexer) stateFn {
|
||||
Loop:
|
||||
for {
|
||||
switch l.next() {
|
||||
case '\\':
|
||||
if r := l.next(); r != eof && r != '\n' {
|
||||
break
|
||||
}
|
||||
fallthrough
|
||||
case eof, '\n':
|
||||
return l.errorf("unterminated quoted string")
|
||||
case '"':
|
||||
break Loop
|
||||
}
|
||||
}
|
||||
l.emit(itemString)
|
||||
return lexInsideAction
|
||||
}
|
||||
|
||||
// lexRawQuote scans a raw quoted string.
|
||||
func lexRawQuote(l *lexer) stateFn {
|
||||
Loop:
|
||||
for {
|
||||
switch l.next() {
|
||||
case eof, '\n':
|
||||
return l.errorf("unterminated raw quoted string")
|
||||
case '`':
|
||||
break Loop
|
||||
}
|
||||
}
|
||||
l.emit(itemRawString)
|
||||
return lexInsideAction
|
||||
}
|
||||
|
||||
// isSpace reports whether r is a space character.
|
||||
func isSpace(r rune) bool {
|
||||
return r == ' ' || r == '\t'
|
||||
}
|
||||
|
||||
// isEndOfLine reports whether r is an end-of-line character.
|
||||
func isEndOfLine(r rune) bool {
|
||||
return r == '\r' || r == '\n'
|
||||
}
|
||||
|
||||
// isAlphaNumeric reports whether r is an alphabetic, digit, or underscore.
|
||||
func isAlphaNumeric(r rune) bool {
|
||||
return r == '_' || unicode.IsLetter(r) || unicode.IsDigit(r)
|
||||
}
|
||||
vendor/github.com/alecthomas/template/parse/node.go (generated, vendored, new file, 834 lines)
@@ -0,0 +1,834 @@
|
||||
// Copyright 2011 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
// Parse nodes.
|
||||
|
||||
package parse
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"strconv"
|
||||
"strings"
|
||||
)
|
||||
|
||||
var textFormat = "%s" // Changed to "%q" in tests for better error messages.
|
||||
|
||||
// A Node is an element in the parse tree. The interface is trivial.
|
||||
// The interface contains an unexported method so that only
|
||||
// types local to this package can satisfy it.
|
||||
type Node interface {
|
||||
Type() NodeType
|
||||
String() string
|
||||
// Copy does a deep copy of the Node and all its components.
|
||||
// To avoid type assertions, some XxxNodes also have specialized
|
||||
// CopyXxx methods that return *XxxNode.
|
||||
Copy() Node
|
||||
Position() Pos // byte position of start of node in full original input string
|
||||
// tree returns the containing *Tree.
|
||||
// It is unexported so all implementations of Node are in this package.
|
||||
tree() *Tree
|
||||
}
|
||||
|
||||
// NodeType identifies the type of a parse tree node.
|
||||
type NodeType int
|
||||
|
||||
// Pos represents a byte position in the original input text from which
|
||||
// this template was parsed.
|
||||
type Pos int
|
||||
|
||||
func (p Pos) Position() Pos {
|
||||
return p
|
||||
}
|
||||
|
||||
// Type returns itself and provides an easy default implementation
|
||||
// for embedding in a Node. Embedded in all non-trivial Nodes.
|
||||
func (t NodeType) Type() NodeType {
|
||||
return t
|
||||
}
|
||||
|
||||
const (
|
||||
NodeText NodeType = iota // Plain text.
|
||||
NodeAction // A non-control action such as a field evaluation.
|
||||
NodeBool // A boolean constant.
|
||||
NodeChain // A sequence of field accesses.
|
||||
NodeCommand // An element of a pipeline.
|
||||
NodeDot // The cursor, dot.
|
||||
nodeElse // An else action. Not added to tree.
|
||||
nodeEnd // An end action. Not added to tree.
|
||||
NodeField // A field or method name.
|
||||
NodeIdentifier // An identifier; always a function name.
|
||||
NodeIf // An if action.
|
||||
NodeList // A list of Nodes.
|
||||
NodeNil // An untyped nil constant.
|
||||
NodeNumber // A numerical constant.
|
||||
NodePipe // A pipeline of commands.
|
||||
NodeRange // A range action.
|
||||
NodeString // A string constant.
|
||||
NodeTemplate // A template invocation action.
|
||||
NodeVariable // A $ variable.
|
||||
NodeWith // A with action.
|
||||
)
|
||||
|
||||
// Nodes.
|
||||
|
||||
// ListNode holds a sequence of nodes.
|
||||
type ListNode struct {
|
||||
NodeType
|
||||
Pos
|
||||
tr *Tree
|
||||
Nodes []Node // The element nodes in lexical order.
|
||||
}
|
||||
|
||||
func (t *Tree) newList(pos Pos) *ListNode {
|
||||
return &ListNode{tr: t, NodeType: NodeList, Pos: pos}
|
||||
}
|
||||
|
||||
func (l *ListNode) append(n Node) {
|
||||
l.Nodes = append(l.Nodes, n)
|
||||
}
|
||||
|
||||
func (l *ListNode) tree() *Tree {
|
||||
return l.tr
|
||||
}
|
||||
|
||||
func (l *ListNode) String() string {
|
||||
b := new(bytes.Buffer)
|
||||
for _, n := range l.Nodes {
|
||||
fmt.Fprint(b, n)
|
||||
}
|
||||
return b.String()
|
||||
}
|
||||
|
||||
func (l *ListNode) CopyList() *ListNode {
|
||||
if l == nil {
|
||||
return l
|
||||
}
|
||||
n := l.tr.newList(l.Pos)
|
||||
for _, elem := range l.Nodes {
|
||||
n.append(elem.Copy())
|
||||
}
|
||||
return n
|
||||
}
|
||||
|
||||
func (l *ListNode) Copy() Node {
|
||||
return l.CopyList()
|
||||
}
|
||||
|
||||
// TextNode holds plain text.
|
||||
type TextNode struct {
|
||||
NodeType
|
||||
Pos
|
||||
tr *Tree
|
||||
Text []byte // The text; may span newlines.
|
||||
}
|
||||
|
||||
func (t *Tree) newText(pos Pos, text string) *TextNode {
|
||||
return &TextNode{tr: t, NodeType: NodeText, Pos: pos, Text: []byte(text)}
|
||||
}
|
||||
|
||||
func (t *TextNode) String() string {
|
||||
return fmt.Sprintf(textFormat, t.Text)
|
||||
}
|
||||
|
||||
func (t *TextNode) tree() *Tree {
|
||||
return t.tr
|
||||
}
|
||||
|
||||
func (t *TextNode) Copy() Node {
|
||||
return &TextNode{tr: t.tr, NodeType: NodeText, Pos: t.Pos, Text: append([]byte{}, t.Text...)}
|
||||
}
|
||||
|
||||
// PipeNode holds a pipeline with optional declaration
|
||||
type PipeNode struct {
|
||||
NodeType
|
||||
Pos
|
||||
tr *Tree
|
||||
Line int // The line number in the input (deprecated; kept for compatibility)
|
||||
Decl []*VariableNode // Variable declarations in lexical order.
|
||||
Cmds []*CommandNode // The commands in lexical order.
|
||||
}
|
||||
|
||||
func (t *Tree) newPipeline(pos Pos, line int, decl []*VariableNode) *PipeNode {
|
||||
return &PipeNode{tr: t, NodeType: NodePipe, Pos: pos, Line: line, Decl: decl}
|
||||
}
|
||||
|
||||
func (p *PipeNode) append(command *CommandNode) {
|
||||
p.Cmds = append(p.Cmds, command)
|
||||
}
|
||||
|
||||
func (p *PipeNode) String() string {
|
||||
s := ""
|
||||
if len(p.Decl) > 0 {
|
||||
for i, v := range p.Decl {
|
||||
if i > 0 {
|
||||
s += ", "
|
||||
}
|
||||
s += v.String()
|
||||
}
|
||||
s += " := "
|
||||
}
|
||||
for i, c := range p.Cmds {
|
||||
if i > 0 {
|
||||
s += " | "
|
||||
}
|
||||
s += c.String()
|
||||
}
|
||||
return s
|
||||
}
|
||||
|
||||
func (p *PipeNode) tree() *Tree {
|
||||
return p.tr
|
||||
}
|
||||
|
||||
func (p *PipeNode) CopyPipe() *PipeNode {
|
||||
if p == nil {
|
||||
return p
|
||||
}
|
||||
var decl []*VariableNode
|
||||
for _, d := range p.Decl {
|
||||
decl = append(decl, d.Copy().(*VariableNode))
|
||||
}
|
||||
n := p.tr.newPipeline(p.Pos, p.Line, decl)
|
||||
for _, c := range p.Cmds {
|
||||
n.append(c.Copy().(*CommandNode))
|
||||
}
|
||||
return n
|
||||
}
|
||||
|
||||
func (p *PipeNode) Copy() Node {
|
||||
return p.CopyPipe()
|
||||
}
|
||||
|
||||
// ActionNode holds an action (something bounded by delimiters).
|
||||
// Control actions have their own nodes; ActionNode represents simple
|
||||
// ones such as field evaluations and parenthesized pipelines.
|
||||
type ActionNode struct {
|
||||
NodeType
|
||||
Pos
|
||||
tr *Tree
|
||||
Line int // The line number in the input (deprecated; kept for compatibility)
|
||||
Pipe *PipeNode // The pipeline in the action.
|
||||
}
|
||||
|
||||
func (t *Tree) newAction(pos Pos, line int, pipe *PipeNode) *ActionNode {
|
||||
return &ActionNode{tr: t, NodeType: NodeAction, Pos: pos, Line: line, Pipe: pipe}
|
||||
}
|
||||
|
||||
func (a *ActionNode) String() string {
|
||||
return fmt.Sprintf("{{%s}}", a.Pipe)
|
||||
|
||||
}
|
||||
|
||||
func (a *ActionNode) tree() *Tree {
|
||||
return a.tr
|
||||
}
|
||||
|
||||
func (a *ActionNode) Copy() Node {
|
||||
return a.tr.newAction(a.Pos, a.Line, a.Pipe.CopyPipe())
|
||||
|
||||
}
|
||||
|
||||
// CommandNode holds a command (a pipeline inside an evaluating action).
|
||||
type CommandNode struct {
|
||||
NodeType
|
||||
Pos
|
||||
tr *Tree
|
||||
Args []Node // Arguments in lexical order: Identifier, field, or constant.
|
||||
}
|
||||
|
||||
func (t *Tree) newCommand(pos Pos) *CommandNode {
|
||||
return &CommandNode{tr: t, NodeType: NodeCommand, Pos: pos}
|
||||
}
|
||||
|
||||
func (c *CommandNode) append(arg Node) {
|
||||
c.Args = append(c.Args, arg)
|
||||
}
|
||||
|
||||
func (c *CommandNode) String() string {
|
||||
s := ""
|
||||
for i, arg := range c.Args {
|
||||
if i > 0 {
|
||||
s += " "
|
||||
}
|
||||
if arg, ok := arg.(*PipeNode); ok {
|
||||
s += "(" + arg.String() + ")"
|
||||
continue
|
||||
}
|
||||
s += arg.String()
|
||||
}
|
||||
return s
|
||||
}
|
||||
|
||||
func (c *CommandNode) tree() *Tree {
|
||||
return c.tr
|
||||
}
|
||||
|
||||
func (c *CommandNode) Copy() Node {
|
||||
if c == nil {
|
||||
return c
|
||||
}
|
||||
n := c.tr.newCommand(c.Pos)
|
||||
for _, c := range c.Args {
|
||||
n.append(c.Copy())
|
||||
}
|
||||
return n
|
||||
}
|
||||
|
||||
// IdentifierNode holds an identifier.
|
||||
type IdentifierNode struct {
|
||||
NodeType
|
||||
Pos
|
||||
tr *Tree
|
||||
Ident string // The identifier's name.
|
||||
}
|
||||
|
||||
// NewIdentifier returns a new IdentifierNode with the given identifier name.
|
||||
func NewIdentifier(ident string) *IdentifierNode {
|
||||
return &IdentifierNode{NodeType: NodeIdentifier, Ident: ident}
|
||||
}
|
||||
|
||||
// SetPos sets the position. NewIdentifier is a public method so we can't modify its signature.
|
||||
// Chained for convenience.
|
||||
// TODO: fix one day?
|
||||
func (i *IdentifierNode) SetPos(pos Pos) *IdentifierNode {
|
||||
i.Pos = pos
|
||||
return i
|
||||
}
|
||||
|
||||
// SetTree sets the parent tree for the node. NewIdentifier is a public method so we can't modify its signature.
|
||||
// Chained for convenience.
|
||||
// TODO: fix one day?
|
||||
func (i *IdentifierNode) SetTree(t *Tree) *IdentifierNode {
|
||||
i.tr = t
|
||||
return i
|
||||
}
|
||||
|
||||
func (i *IdentifierNode) String() string {
|
||||
return i.Ident
|
||||
}
|
||||
|
||||
func (i *IdentifierNode) tree() *Tree {
|
||||
return i.tr
|
||||
}
|
||||
|
||||
func (i *IdentifierNode) Copy() Node {
|
||||
return NewIdentifier(i.Ident).SetTree(i.tr).SetPos(i.Pos)
|
||||
}
|
||||
|
||||
// VariableNode holds a list of variable names, possibly with chained field
|
||||
// accesses. The dollar sign is part of the (first) name.
|
||||
type VariableNode struct {
|
||||
NodeType
|
||||
Pos
|
||||
tr *Tree
|
||||
Ident []string // Variable name and fields in lexical order.
|
||||
}
|
||||
|
||||
func (t *Tree) newVariable(pos Pos, ident string) *VariableNode {
|
||||
return &VariableNode{tr: t, NodeType: NodeVariable, Pos: pos, Ident: strings.Split(ident, ".")}
|
||||
}
|
||||
|
||||
func (v *VariableNode) String() string {
|
||||
s := ""
|
||||
for i, id := range v.Ident {
|
||||
if i > 0 {
|
||||
s += "."
|
||||
}
|
||||
s += id
|
||||
}
|
||||
return s
|
||||
}
|
||||
|
||||
func (v *VariableNode) tree() *Tree {
|
||||
return v.tr
|
||||
}
|
||||
|
||||
func (v *VariableNode) Copy() Node {
|
||||
return &VariableNode{tr: v.tr, NodeType: NodeVariable, Pos: v.Pos, Ident: append([]string{}, v.Ident...)}
|
||||
}
|
||||
|
||||
// DotNode holds the special identifier '.'.
|
||||
type DotNode struct {
|
||||
NodeType
|
||||
Pos
|
||||
tr *Tree
|
||||
}
|
||||
|
||||
func (t *Tree) newDot(pos Pos) *DotNode {
|
||||
return &DotNode{tr: t, NodeType: NodeDot, Pos: pos}
|
||||
}
|
||||
|
||||
func (d *DotNode) Type() NodeType {
|
||||
// Override method on embedded NodeType for API compatibility.
|
||||
// TODO: Not really a problem; could change API without effect but
|
||||
// api tool complains.
|
||||
return NodeDot
|
||||
}
|
||||
|
||||
func (d *DotNode) String() string {
|
||||
return "."
|
||||
}
|
||||
|
||||
func (d *DotNode) tree() *Tree {
|
||||
return d.tr
|
||||
}
|
||||
|
||||
func (d *DotNode) Copy() Node {
|
||||
return d.tr.newDot(d.Pos)
|
||||
}
|
||||
|
||||
// NilNode holds the special identifier 'nil' representing an untyped nil constant.
|
||||
type NilNode struct {
|
||||
NodeType
|
||||
Pos
|
||||
tr *Tree
|
||||
}
|
||||
|
||||
func (t *Tree) newNil(pos Pos) *NilNode {
|
||||
return &NilNode{tr: t, NodeType: NodeNil, Pos: pos}
|
||||
}
|
||||
|
||||
func (n *NilNode) Type() NodeType {
|
||||
// Override method on embedded NodeType for API compatibility.
|
||||
// TODO: Not really a problem; could change API without effect but
|
||||
// api tool complains.
|
||||
return NodeNil
|
||||
}
|
||||
|
||||
func (n *NilNode) String() string {
|
||||
return "nil"
|
||||
}
|
||||
|
||||
func (n *NilNode) tree() *Tree {
|
||||
return n.tr
|
||||
}
|
||||
|
||||
func (n *NilNode) Copy() Node {
|
||||
return n.tr.newNil(n.Pos)
|
||||
}
|
||||
|
||||
// FieldNode holds a field (identifier starting with '.').
|
||||
// The names may be chained ('.x.y').
|
||||
// The period is dropped from each ident.
|
||||
type FieldNode struct {
|
||||
NodeType
|
||||
Pos
|
||||
tr *Tree
|
||||
Ident []string // The identifiers in lexical order.
|
||||
}
|
||||
|
||||
func (t *Tree) newField(pos Pos, ident string) *FieldNode {
|
||||
return &FieldNode{tr: t, NodeType: NodeField, Pos: pos, Ident: strings.Split(ident[1:], ".")} // [1:] to drop leading period
|
||||
}
|
||||
|
||||
func (f *FieldNode) String() string {
|
||||
s := ""
|
||||
for _, id := range f.Ident {
|
||||
s += "." + id
|
||||
}
|
||||
return s
|
||||
}
|
||||
|
||||
func (f *FieldNode) tree() *Tree {
|
||||
return f.tr
|
||||
}
|
||||
|
||||
func (f *FieldNode) Copy() Node {
|
||||
return &FieldNode{tr: f.tr, NodeType: NodeField, Pos: f.Pos, Ident: append([]string{}, f.Ident...)}
|
||||
}
|
||||
|
||||
// ChainNode holds a term followed by a chain of field accesses (identifier starting with '.').
|
||||
// The names may be chained ('.x.y').
|
||||
// The periods are dropped from each ident.
|
||||
type ChainNode struct {
|
||||
NodeType
|
||||
Pos
|
||||
tr *Tree
|
||||
Node Node
|
||||
Field []string // The identifiers in lexical order.
|
||||
}
|
||||
|
||||
func (t *Tree) newChain(pos Pos, node Node) *ChainNode {
|
||||
return &ChainNode{tr: t, NodeType: NodeChain, Pos: pos, Node: node}
|
||||
}
|
||||
|
||||
// Add adds the named field (which should start with a period) to the end of the chain.
|
||||
func (c *ChainNode) Add(field string) {
|
||||
if len(field) == 0 || field[0] != '.' {
|
||||
panic("no dot in field")
|
||||
}
|
||||
field = field[1:] // Remove leading dot.
|
||||
if field == "" {
|
||||
panic("empty field")
|
||||
}
|
||||
c.Field = append(c.Field, field)
|
||||
}
|
||||
|
||||
func (c *ChainNode) String() string {
|
||||
s := c.Node.String()
|
||||
if _, ok := c.Node.(*PipeNode); ok {
|
||||
s = "(" + s + ")"
|
||||
}
|
||||
for _, field := range c.Field {
|
||||
s += "." + field
|
||||
}
|
||||
return s
|
||||
}
|
||||
|
||||
func (c *ChainNode) tree() *Tree {
|
||||
return c.tr
|
||||
}
|
||||
|
||||
func (c *ChainNode) Copy() Node {
|
||||
return &ChainNode{tr: c.tr, NodeType: NodeChain, Pos: c.Pos, Node: c.Node, Field: append([]string{}, c.Field...)}
|
||||
}
|
||||
|
||||
// BoolNode holds a boolean constant.
|
||||
type BoolNode struct {
|
||||
NodeType
|
||||
Pos
|
||||
tr *Tree
|
||||
True bool // The value of the boolean constant.
|
||||
}
|
||||
|
||||
func (t *Tree) newBool(pos Pos, true bool) *BoolNode {
|
||||
return &BoolNode{tr: t, NodeType: NodeBool, Pos: pos, True: true}
|
||||
}
|
||||
|
||||
func (b *BoolNode) String() string {
|
||||
if b.True {
|
||||
return "true"
|
||||
}
|
||||
return "false"
|
||||
}
|
||||
|
||||
func (b *BoolNode) tree() *Tree {
|
||||
return b.tr
|
||||
}
|
||||
|
||||
func (b *BoolNode) Copy() Node {
|
||||
return b.tr.newBool(b.Pos, b.True)
|
||||
}
|
||||
|
||||
// NumberNode holds a number: signed or unsigned integer, float, or complex.
|
||||
// The value is parsed and stored under all the types that can represent the value.
|
||||
// This simulates in a small amount of code the behavior of Go's ideal constants.
|
||||
type NumberNode struct {
|
||||
NodeType
|
||||
Pos
|
||||
tr *Tree
|
||||
IsInt bool // Number has an integral value.
|
||||
IsUint bool // Number has an unsigned integral value.
|
||||
IsFloat bool // Number has a floating-point value.
|
||||
IsComplex bool // Number is complex.
|
||||
Int64 int64 // The signed integer value.
|
||||
Uint64 uint64 // The unsigned integer value.
|
||||
Float64 float64 // The floating-point value.
|
||||
Complex128 complex128 // The complex value.
|
||||
Text string // The original textual representation from the input.
|
||||
}
|
||||
|
||||
func (t *Tree) newNumber(pos Pos, text string, typ itemType) (*NumberNode, error) {
|
||||
n := &NumberNode{tr: t, NodeType: NodeNumber, Pos: pos, Text: text}
|
||||
switch typ {
|
||||
case itemCharConstant:
|
||||
rune, _, tail, err := strconv.UnquoteChar(text[1:], text[0])
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if tail != "'" {
|
||||
return nil, fmt.Errorf("malformed character constant: %s", text)
|
||||
}
|
||||
n.Int64 = int64(rune)
|
||||
n.IsInt = true
|
||||
n.Uint64 = uint64(rune)
|
||||
n.IsUint = true
|
||||
n.Float64 = float64(rune) // odd but those are the rules.
|
||||
n.IsFloat = true
|
||||
return n, nil
|
||||
case itemComplex:
|
||||
// fmt.Sscan can parse the pair, so let it do the work.
|
||||
if _, err := fmt.Sscan(text, &n.Complex128); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
n.IsComplex = true
|
||||
n.simplifyComplex()
|
||||
return n, nil
|
||||
}
|
||||
// Imaginary constants can only be complex unless they are zero.
|
||||
if len(text) > 0 && text[len(text)-1] == 'i' {
|
||||
f, err := strconv.ParseFloat(text[:len(text)-1], 64)
|
||||
if err == nil {
|
||||
n.IsComplex = true
|
||||
n.Complex128 = complex(0, f)
|
||||
n.simplifyComplex()
|
||||
return n, nil
|
||||
}
|
||||
}
|
||||
// Do integer test first so we get 0x123 etc.
|
||||
u, err := strconv.ParseUint(text, 0, 64) // will fail for -0; fixed below.
|
||||
if err == nil {
|
||||
n.IsUint = true
|
||||
n.Uint64 = u
|
||||
}
|
||||
i, err := strconv.ParseInt(text, 0, 64)
|
||||
if err == nil {
|
||||
n.IsInt = true
|
||||
n.Int64 = i
|
||||
if i == 0 {
|
||||
n.IsUint = true // in case of -0.
|
||||
n.Uint64 = u
|
||||
}
|
||||
}
|
||||
// If an integer extraction succeeded, promote the float.
|
||||
if n.IsInt {
|
||||
n.IsFloat = true
|
||||
n.Float64 = float64(n.Int64)
|
||||
} else if n.IsUint {
|
||||
n.IsFloat = true
|
||||
n.Float64 = float64(n.Uint64)
|
||||
} else {
|
||||
f, err := strconv.ParseFloat(text, 64)
|
||||
if err == nil {
|
||||
n.IsFloat = true
|
||||
n.Float64 = f
|
||||
// If a floating-point extraction succeeded, extract the int if needed.
|
||||
if !n.IsInt && float64(int64(f)) == f {
|
||||
n.IsInt = true
|
||||
n.Int64 = int64(f)
|
||||
}
|
||||
if !n.IsUint && float64(uint64(f)) == f {
|
||||
n.IsUint = true
|
||||
n.Uint64 = uint64(f)
|
||||
}
|
||||
}
|
||||
}
|
||||
if !n.IsInt && !n.IsUint && !n.IsFloat {
|
||||
return nil, fmt.Errorf("illegal number syntax: %q", text)
|
||||
}
|
||||
return n, nil
|
||||
}
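
A rough in-package sketch of the "ideal constant" behaviour described above. Since newNumber and itemNumber are unexported this would only compile inside package parse; the literal and the helper name are invented for illustration.

```go
// Hypothetical in-package sketch: one literal populates several numeric views.
func exampleNumberViews() {
	t := New("example")
	n, err := t.newNumber(0, "1e2", itemNumber)
	if err != nil {
		return // malformed literal
	}
	_ = n.IsFloat // true: Float64 == 100
	_ = n.IsInt   // true: 100 is integral, so Int64 == 100 is filled in too
	_ = n.IsUint  // true: Uint64 == 100
}
```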
|
||||
|
||||
// simplifyComplex pulls out any other types that are represented by the complex number.
|
||||
// These all require that the imaginary part be zero.
|
||||
func (n *NumberNode) simplifyComplex() {
|
||||
n.IsFloat = imag(n.Complex128) == 0
|
||||
if n.IsFloat {
|
||||
n.Float64 = real(n.Complex128)
|
||||
n.IsInt = float64(int64(n.Float64)) == n.Float64
|
||||
if n.IsInt {
|
||||
n.Int64 = int64(n.Float64)
|
||||
}
|
||||
n.IsUint = float64(uint64(n.Float64)) == n.Float64
|
||||
if n.IsUint {
|
||||
n.Uint64 = uint64(n.Float64)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (n *NumberNode) String() string {
|
||||
return n.Text
|
||||
}
|
||||
|
||||
func (n *NumberNode) tree() *Tree {
|
||||
return n.tr
|
||||
}
|
||||
|
||||
func (n *NumberNode) Copy() Node {
|
||||
nn := new(NumberNode)
|
||||
*nn = *n // Easy, fast, correct.
|
||||
return nn
|
||||
}
|
||||
|
||||
// StringNode holds a string constant. The value has been "unquoted".
|
||||
type StringNode struct {
|
||||
NodeType
|
||||
Pos
|
||||
tr *Tree
|
||||
Quoted string // The original text of the string, with quotes.
|
||||
Text string // The string, after quote processing.
|
||||
}
|
||||
|
||||
func (t *Tree) newString(pos Pos, orig, text string) *StringNode {
|
||||
return &StringNode{tr: t, NodeType: NodeString, Pos: pos, Quoted: orig, Text: text}
|
||||
}
|
||||
|
||||
func (s *StringNode) String() string {
|
||||
return s.Quoted
|
||||
}
|
||||
|
||||
func (s *StringNode) tree() *Tree {
|
||||
return s.tr
|
||||
}
|
||||
|
||||
func (s *StringNode) Copy() Node {
|
||||
return s.tr.newString(s.Pos, s.Quoted, s.Text)
|
||||
}
|
||||
|
||||
// endNode represents an {{end}} action.
|
||||
// It does not appear in the final parse tree.
|
||||
type endNode struct {
|
||||
NodeType
|
||||
Pos
|
||||
tr *Tree
|
||||
}
|
||||
|
||||
func (t *Tree) newEnd(pos Pos) *endNode {
|
||||
return &endNode{tr: t, NodeType: nodeEnd, Pos: pos}
|
||||
}
|
||||
|
||||
func (e *endNode) String() string {
|
||||
return "{{end}}"
|
||||
}
|
||||
|
||||
func (e *endNode) tree() *Tree {
|
||||
return e.tr
|
||||
}
|
||||
|
||||
func (e *endNode) Copy() Node {
|
||||
return e.tr.newEnd(e.Pos)
|
||||
}
|
||||
|
||||
// elseNode represents an {{else}} action. Does not appear in the final tree.
|
||||
type elseNode struct {
|
||||
NodeType
|
||||
Pos
|
||||
tr *Tree
|
||||
Line int // The line number in the input (deprecated; kept for compatibility)
|
||||
}
|
||||
|
||||
func (t *Tree) newElse(pos Pos, line int) *elseNode {
|
||||
return &elseNode{tr: t, NodeType: nodeElse, Pos: pos, Line: line}
|
||||
}
|
||||
|
||||
func (e *elseNode) Type() NodeType {
|
||||
return nodeElse
|
||||
}
|
||||
|
||||
func (e *elseNode) String() string {
|
||||
return "{{else}}"
|
||||
}
|
||||
|
||||
func (e *elseNode) tree() *Tree {
|
||||
return e.tr
|
||||
}
|
||||
|
||||
func (e *elseNode) Copy() Node {
|
||||
return e.tr.newElse(e.Pos, e.Line)
|
||||
}
|
||||
|
||||
// BranchNode is the common representation of if, range, and with.
|
||||
type BranchNode struct {
|
||||
NodeType
|
||||
Pos
|
||||
tr *Tree
|
||||
Line int // The line number in the input (deprecated; kept for compatibility)
|
||||
Pipe *PipeNode // The pipeline to be evaluated.
|
||||
List *ListNode // What to execute if the value is non-empty.
|
||||
ElseList *ListNode // What to execute if the value is empty (nil if absent).
|
||||
}
|
||||
|
||||
func (b *BranchNode) String() string {
|
||||
name := ""
|
||||
switch b.NodeType {
|
||||
case NodeIf:
|
||||
name = "if"
|
||||
case NodeRange:
|
||||
name = "range"
|
||||
case NodeWith:
|
||||
name = "with"
|
||||
default:
|
||||
panic("unknown branch type")
|
||||
}
|
||||
if b.ElseList != nil {
|
||||
return fmt.Sprintf("{{%s %s}}%s{{else}}%s{{end}}", name, b.Pipe, b.List, b.ElseList)
|
||||
}
|
||||
return fmt.Sprintf("{{%s %s}}%s{{end}}", name, b.Pipe, b.List)
|
||||
}
|
||||
|
||||
func (b *BranchNode) tree() *Tree {
|
||||
return b.tr
|
||||
}
|
||||
|
||||
func (b *BranchNode) Copy() Node {
|
||||
switch b.NodeType {
|
||||
case NodeIf:
|
||||
return b.tr.newIf(b.Pos, b.Line, b.Pipe, b.List, b.ElseList)
|
||||
case NodeRange:
|
||||
return b.tr.newRange(b.Pos, b.Line, b.Pipe, b.List, b.ElseList)
|
||||
case NodeWith:
|
||||
return b.tr.newWith(b.Pos, b.Line, b.Pipe, b.List, b.ElseList)
|
||||
default:
|
||||
panic("unknown branch type")
|
||||
}
|
||||
}
|
||||
|
||||
// IfNode represents an {{if}} action and its commands.
|
||||
type IfNode struct {
|
||||
BranchNode
|
||||
}
|
||||
|
||||
func (t *Tree) newIf(pos Pos, line int, pipe *PipeNode, list, elseList *ListNode) *IfNode {
|
||||
return &IfNode{BranchNode{tr: t, NodeType: NodeIf, Pos: pos, Line: line, Pipe: pipe, List: list, ElseList: elseList}}
|
||||
}
|
||||
|
||||
func (i *IfNode) Copy() Node {
|
||||
return i.tr.newIf(i.Pos, i.Line, i.Pipe.CopyPipe(), i.List.CopyList(), i.ElseList.CopyList())
|
||||
}
|
||||
|
||||
// RangeNode represents a {{range}} action and its commands.
|
||||
type RangeNode struct {
|
||||
BranchNode
|
||||
}
|
||||
|
||||
func (t *Tree) newRange(pos Pos, line int, pipe *PipeNode, list, elseList *ListNode) *RangeNode {
|
||||
return &RangeNode{BranchNode{tr: t, NodeType: NodeRange, Pos: pos, Line: line, Pipe: pipe, List: list, ElseList: elseList}}
|
||||
}
|
||||
|
||||
func (r *RangeNode) Copy() Node {
|
||||
return r.tr.newRange(r.Pos, r.Line, r.Pipe.CopyPipe(), r.List.CopyList(), r.ElseList.CopyList())
|
||||
}
|
||||
|
||||
// WithNode represents a {{with}} action and its commands.
|
||||
type WithNode struct {
|
||||
BranchNode
|
||||
}
|
||||
|
||||
func (t *Tree) newWith(pos Pos, line int, pipe *PipeNode, list, elseList *ListNode) *WithNode {
|
||||
return &WithNode{BranchNode{tr: t, NodeType: NodeWith, Pos: pos, Line: line, Pipe: pipe, List: list, ElseList: elseList}}
|
||||
}
|
||||
|
||||
func (w *WithNode) Copy() Node {
|
||||
return w.tr.newWith(w.Pos, w.Line, w.Pipe.CopyPipe(), w.List.CopyList(), w.ElseList.CopyList())
|
||||
}
|
||||
|
||||
// TemplateNode represents a {{template}} action.
|
||||
type TemplateNode struct {
|
||||
NodeType
|
||||
Pos
|
||||
tr *Tree
|
||||
Line int // The line number in the input (deprecated; kept for compatibility)
|
||||
Name string // The name of the template (unquoted).
|
||||
Pipe *PipeNode // The command to evaluate as dot for the template.
|
||||
}
|
||||
|
||||
func (t *Tree) newTemplate(pos Pos, line int, name string, pipe *PipeNode) *TemplateNode {
|
||||
return &TemplateNode{tr: t, NodeType: NodeTemplate, Pos: pos, Line: line, Name: name, Pipe: pipe}
|
||||
}
|
||||
|
||||
func (t *TemplateNode) String() string {
|
||||
if t.Pipe == nil {
|
||||
return fmt.Sprintf("{{template %q}}", t.Name)
|
||||
}
|
||||
return fmt.Sprintf("{{template %q %s}}", t.Name, t.Pipe)
|
||||
}
|
||||
|
||||
func (t *TemplateNode) tree() *Tree {
|
||||
return t.tr
|
||||
}
|
||||
|
||||
func (t *TemplateNode) Copy() Node {
|
||||
return t.tr.newTemplate(t.Pos, t.Line, t.Name, t.Pipe.CopyPipe())
|
||||
}
|
||||
vendor/github.com/alecthomas/template/parse/parse.go (generated, vendored, new file, 700 lines)
@@ -0,0 +1,700 @@
|
||||
// Copyright 2011 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
// Package parse builds parse trees for templates as defined by text/template
|
||||
// and html/template. Clients should use those packages to construct templates
|
||||
// rather than this one, which provides shared internal data structures not
|
||||
// intended for general use.
|
||||
package parse
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"runtime"
|
||||
"strconv"
|
||||
"strings"
|
||||
)
|
||||
|
||||
// Tree is the representation of a single parsed template.
|
||||
type Tree struct {
|
||||
Name string // name of the template represented by the tree.
|
||||
ParseName string // name of the top-level template during parsing, for error messages.
|
||||
Root *ListNode // top-level root of the tree.
|
||||
text string // text parsed to create the template (or its parent)
|
||||
// Parsing only; cleared after parse.
|
||||
funcs []map[string]interface{}
|
||||
lex *lexer
|
||||
token [3]item // three-token lookahead for parser.
|
||||
peekCount int
|
||||
vars []string // variables defined at the moment.
|
||||
}
|
||||
|
||||
// Copy returns a copy of the Tree. Any parsing state is discarded.
|
||||
func (t *Tree) Copy() *Tree {
|
||||
if t == nil {
|
||||
return nil
|
||||
}
|
||||
return &Tree{
|
||||
Name: t.Name,
|
||||
ParseName: t.ParseName,
|
||||
Root: t.Root.CopyList(),
|
||||
text: t.text,
|
||||
}
|
||||
}
|
||||
|
||||
// Parse returns a map from template name to parse.Tree, created by parsing the
|
||||
// templates described in the argument string. The top-level template will be
|
||||
// given the specified name. If an error is encountered, parsing stops and an
|
||||
// empty map is returned with the error.
|
||||
func Parse(name, text, leftDelim, rightDelim string, funcs ...map[string]interface{}) (treeSet map[string]*Tree, err error) {
|
||||
treeSet = make(map[string]*Tree)
|
||||
t := New(name)
|
||||
t.text = text
|
||||
_, err = t.Parse(text, leftDelim, rightDelim, treeSet, funcs...)
|
||||
return
|
||||
}
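
A hedged external sketch of the exported Parse entry point above; the template text and names are invented, and fmt is assumed to be imported alongside this parse package.

```go
// Sketch: a template with one embedded {{define}} yields two trees.
func exampleParse() {
	trees, err := parse.Parse("page", `Hello {{define "partial"}}world{{end}}`, "", "")
	if err != nil {
		fmt.Println("parse error:", err)
		return
	}
	for name, tree := range trees {
		fmt.Printf("%s => %q\n", name, tree.Root.String())
	}
	// Prints entries for both "page" and "partial".
}
```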
|
||||
|
||||
// next returns the next token.
|
||||
func (t *Tree) next() item {
|
||||
if t.peekCount > 0 {
|
||||
t.peekCount--
|
||||
} else {
|
||||
t.token[0] = t.lex.nextItem()
|
||||
}
|
||||
return t.token[t.peekCount]
|
||||
}
|
||||
|
||||
// backup backs the input stream up one token.
|
||||
func (t *Tree) backup() {
|
||||
t.peekCount++
|
||||
}
|
||||
|
||||
// backup2 backs the input stream up two tokens.
|
||||
// The zeroth token is already there.
|
||||
func (t *Tree) backup2(t1 item) {
|
||||
t.token[1] = t1
|
||||
t.peekCount = 2
|
||||
}
|
||||
|
||||
// backup3 backs the input stream up three tokens
|
||||
// The zeroth token is already there.
|
||||
func (t *Tree) backup3(t2, t1 item) { // Reverse order: we're pushing back.
|
||||
t.token[1] = t1
|
||||
t.token[2] = t2
|
||||
t.peekCount = 3
|
||||
}
|
||||
|
||||
// peek returns but does not consume the next token.
|
||||
func (t *Tree) peek() item {
|
||||
if t.peekCount > 0 {
|
||||
return t.token[t.peekCount-1]
|
||||
}
|
||||
t.peekCount = 1
|
||||
t.token[0] = t.lex.nextItem()
|
||||
return t.token[0]
|
||||
}
|
||||
|
||||
// nextNonSpace returns the next non-space token.
|
||||
func (t *Tree) nextNonSpace() (token item) {
|
||||
for {
|
||||
token = t.next()
|
||||
if token.typ != itemSpace {
|
||||
break
|
||||
}
|
||||
}
|
||||
return token
|
||||
}
|
||||
|
||||
// peekNonSpace returns but does not consume the next non-space token.
|
||||
func (t *Tree) peekNonSpace() (token item) {
|
||||
for {
|
||||
token = t.next()
|
||||
if token.typ != itemSpace {
|
||||
break
|
||||
}
|
||||
}
|
||||
t.backup()
|
||||
return token
|
||||
}
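
These helpers are unexported, so the following is only a hypothetical in-package sketch of how peek, next, and backup interact.

```go
// Hypothetical in-package sketch of the token look-ahead round trip.
func exampleLookahead(t *Tree) {
	peeked := t.peek()          // buffers the next token without consuming it
	got := t.next()             // returns that same buffered token
	_ = peeked.typ == got.typ   // always true
	t.backup()                  // push it back; the next call to next() re-reads it
}
```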
|
||||
|
||||
// Parsing.
|
||||
|
||||
// New allocates a new parse tree with the given name.
|
||||
func New(name string, funcs ...map[string]interface{}) *Tree {
|
||||
return &Tree{
|
||||
Name: name,
|
||||
funcs: funcs,
|
||||
}
|
||||
}
|
||||
|
||||
// ErrorContext returns a textual representation of the location of the node in the input text.
|
||||
// The receiver is only used when the node does not have a pointer to the tree inside,
|
||||
// which can occur in old code.
|
||||
func (t *Tree) ErrorContext(n Node) (location, context string) {
|
||||
pos := int(n.Position())
|
||||
tree := n.tree()
|
||||
if tree == nil {
|
||||
tree = t
|
||||
}
|
||||
text := tree.text[:pos]
|
||||
byteNum := strings.LastIndex(text, "\n")
|
||||
if byteNum == -1 {
|
||||
byteNum = pos // On first line.
|
||||
} else {
|
||||
byteNum++ // After the newline.
|
||||
byteNum = pos - byteNum
|
||||
}
|
||||
lineNum := 1 + strings.Count(text, "\n")
|
||||
context = n.String()
|
||||
if len(context) > 20 {
|
||||
context = fmt.Sprintf("%.20s...", context)
|
||||
}
|
||||
return fmt.Sprintf("%s:%d:%d", tree.ParseName, lineNum, byteNum), context
|
||||
}
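
A sketch of how a caller might fold ErrorContext into an error message; tree, node, and the message text are invented, with fmt and this parse package assumed imported.

```go
// Sketch: loc is "ParseName:line:byteInLine"; ctx is the node's String(),
// truncated to 20 characters with "..." appended when longer.
func describe(tree *parse.Tree, node parse.Node) string {
	loc, ctx := tree.ErrorContext(node)
	return fmt.Sprintf("%s: executing %s", loc, ctx)
}
```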
|
||||
|
||||
// errorf formats the error and terminates processing.
|
||||
func (t *Tree) errorf(format string, args ...interface{}) {
|
||||
t.Root = nil
|
||||
format = fmt.Sprintf("template: %s:%d: %s", t.ParseName, t.lex.lineNumber(), format)
|
||||
panic(fmt.Errorf(format, args...))
|
||||
}
|
||||
|
||||
// error terminates processing.
|
||||
func (t *Tree) error(err error) {
|
||||
t.errorf("%s", err)
|
||||
}
|
||||
|
||||
// expect consumes the next token and guarantees it has the required type.
|
||||
func (t *Tree) expect(expected itemType, context string) item {
|
||||
token := t.nextNonSpace()
|
||||
if token.typ != expected {
|
||||
t.unexpected(token, context)
|
||||
}
|
||||
return token
|
||||
}
|
||||
|
||||
// expectOneOf consumes the next token and guarantees it has one of the required types.
|
||||
func (t *Tree) expectOneOf(expected1, expected2 itemType, context string) item {
|
||||
token := t.nextNonSpace()
|
||||
if token.typ != expected1 && token.typ != expected2 {
|
||||
t.unexpected(token, context)
|
||||
}
|
||||
return token
|
||||
}
|
||||
|
||||
// unexpected complains about the token and terminates processing.
|
||||
func (t *Tree) unexpected(token item, context string) {
|
||||
t.errorf("unexpected %s in %s", token, context)
|
||||
}
|
||||
|
||||
// recover is the handler that turns panics into returns from the top level of Parse.
|
||||
func (t *Tree) recover(errp *error) {
|
||||
e := recover()
|
||||
if e != nil {
|
||||
if _, ok := e.(runtime.Error); ok {
|
||||
panic(e)
|
||||
}
|
||||
if t != nil {
|
||||
t.stopParse()
|
||||
}
|
||||
*errp = e.(error)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// startParse initializes the parser, using the lexer.
|
||||
func (t *Tree) startParse(funcs []map[string]interface{}, lex *lexer) {
|
||||
t.Root = nil
|
||||
t.lex = lex
|
||||
t.vars = []string{"$"}
|
||||
t.funcs = funcs
|
||||
}
|
||||
|
||||
// stopParse terminates parsing.
|
||||
func (t *Tree) stopParse() {
|
||||
t.lex = nil
|
||||
t.vars = nil
|
||||
t.funcs = nil
|
||||
}
|
||||
|
||||
// Parse parses the template definition string to construct a representation of
|
||||
// the template for execution. If either action delimiter string is empty, the
|
||||
// default ("{{" or "}}") is used. Embedded template definitions are added to
|
||||
// the treeSet map.
|
||||
func (t *Tree) Parse(text, leftDelim, rightDelim string, treeSet map[string]*Tree, funcs ...map[string]interface{}) (tree *Tree, err error) {
|
||||
defer t.recover(&err)
|
||||
t.ParseName = t.Name
|
||||
t.startParse(funcs, lex(t.Name, text, leftDelim, rightDelim))
|
||||
t.text = text
|
||||
t.parse(treeSet)
|
||||
t.add(treeSet)
|
||||
t.stopParse()
|
||||
return t, nil
|
||||
}
|
||||
|
||||
// add adds tree to the treeSet.
|
||||
func (t *Tree) add(treeSet map[string]*Tree) {
|
||||
tree := treeSet[t.Name]
|
||||
if tree == nil || IsEmptyTree(tree.Root) {
|
||||
treeSet[t.Name] = t
|
||||
return
|
||||
}
|
||||
if !IsEmptyTree(t.Root) {
|
||||
t.errorf("template: multiple definition of template %q", t.Name)
|
||||
}
|
||||
}
|
||||
|
||||
// IsEmptyTree reports whether this tree (node) is empty of everything but space.
|
||||
func IsEmptyTree(n Node) bool {
|
||||
switch n := n.(type) {
|
||||
case nil:
|
||||
return true
|
||||
case *ActionNode:
|
||||
case *IfNode:
|
||||
case *ListNode:
|
||||
for _, node := range n.Nodes {
|
||||
if !IsEmptyTree(node) {
|
||||
return false
|
||||
}
|
||||
}
|
||||
return true
|
||||
case *RangeNode:
|
||||
case *TemplateNode:
|
||||
case *TextNode:
|
||||
return len(bytes.TrimSpace(n.Text)) == 0
|
||||
case *WithNode:
|
||||
default:
|
||||
panic("unknown node: " + n.String())
|
||||
}
|
||||
return false
|
||||
}
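
A small sketch of IsEmptyTree from outside the package (parse assumed imported): whitespace-only input counts as empty, which is what lets a later definition of the same name overwrite it.

```go
// Sketch: a tree whose only content is whitespace counts as empty.
func exampleEmpty() bool {
	trees, err := parse.Parse("t", "  \n\t  ", "", "")
	if err != nil {
		return false
	}
	return parse.IsEmptyTree(trees["t"].Root) // true
}
```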
|
||||
|
||||
// parse is the top-level parser for a template, essentially the same
|
||||
// as itemList except it also parses {{define}} actions.
|
||||
// It runs to EOF.
|
||||
func (t *Tree) parse(treeSet map[string]*Tree) (next Node) {
|
||||
t.Root = t.newList(t.peek().pos)
|
||||
for t.peek().typ != itemEOF {
|
||||
if t.peek().typ == itemLeftDelim {
|
||||
delim := t.next()
|
||||
if t.nextNonSpace().typ == itemDefine {
|
||||
newT := New("definition") // name will be updated once we know it.
|
||||
newT.text = t.text
|
||||
newT.ParseName = t.ParseName
|
||||
newT.startParse(t.funcs, t.lex)
|
||||
newT.parseDefinition(treeSet)
|
||||
continue
|
||||
}
|
||||
t.backup2(delim)
|
||||
}
|
||||
n := t.textOrAction()
|
||||
if n.Type() == nodeEnd {
|
||||
t.errorf("unexpected %s", n)
|
||||
}
|
||||
t.Root.append(n)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// parseDefinition parses a {{define}} ... {{end}} template definition and
|
||||
// installs the definition in the treeSet map. The "define" keyword has already
|
||||
// been scanned.
|
||||
func (t *Tree) parseDefinition(treeSet map[string]*Tree) {
|
||||
const context = "define clause"
|
||||
name := t.expectOneOf(itemString, itemRawString, context)
|
||||
var err error
|
||||
t.Name, err = strconv.Unquote(name.val)
|
||||
if err != nil {
|
||||
t.error(err)
|
||||
}
|
||||
t.expect(itemRightDelim, context)
|
||||
var end Node
|
||||
t.Root, end = t.itemList()
|
||||
if end.Type() != nodeEnd {
|
||||
t.errorf("unexpected %s in %s", end, context)
|
||||
}
|
||||
t.add(treeSet)
|
||||
t.stopParse()
|
||||
}
|
||||
|
||||
// itemList:
|
||||
// textOrAction*
|
||||
// Terminates at {{end}} or {{else}}, returned separately.
|
||||
func (t *Tree) itemList() (list *ListNode, next Node) {
|
||||
list = t.newList(t.peekNonSpace().pos)
|
||||
for t.peekNonSpace().typ != itemEOF {
|
||||
n := t.textOrAction()
|
||||
switch n.Type() {
|
||||
case nodeEnd, nodeElse:
|
||||
return list, n
|
||||
}
|
||||
list.append(n)
|
||||
}
|
||||
t.errorf("unexpected EOF")
|
||||
return
|
||||
}
|
||||
|
||||
// textOrAction:
|
||||
// text | action
|
||||
func (t *Tree) textOrAction() Node {
|
||||
switch token := t.nextNonSpace(); token.typ {
|
||||
case itemElideNewline:
|
||||
return t.elideNewline()
|
||||
case itemText:
|
||||
return t.newText(token.pos, token.val)
|
||||
case itemLeftDelim:
|
||||
return t.action()
|
||||
default:
|
||||
t.unexpected(token, "input")
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// elideNewline:
|
||||
//	Remove newlines that follow rightDelim when the \\ elision marker is present.
|
||||
func (t *Tree) elideNewline() Node {
|
||||
token := t.peek()
|
||||
if token.typ != itemText {
|
||||
t.unexpected(token, "input")
|
||||
return nil
|
||||
}
|
||||
|
||||
t.next()
|
||||
stripped := strings.TrimLeft(token.val, "\n\r")
|
||||
diff := len(token.val) - len(stripped)
|
||||
if diff > 0 {
|
||||
// This is a bit nasty. We mutate the token in-place to remove
|
||||
// preceding newlines.
|
||||
token.pos += Pos(diff)
|
||||
token.val = stripped
|
||||
}
|
||||
return t.newText(token.pos, token.val)
|
||||
}
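
A hedged illustration of the newline elision handled above. The `\\`-after-delimiter spelling is an assumption about the matching lexer token (itemElideNewline); only the stripping of leading newlines is taken from the code here.

```go
// Assumed syntax: writing "\\" immediately after the closing delimiter drops
// the newline(s) that follow it.
//
//     {{range .Items}}\\
//     item: {{.}}
//     {{end}}\\
//
// Without the markers, each "}}" would contribute its trailing newline to the
// output; with them, the parser strips those leading newlines from the text
// node that follows, so only the "item: ..." lines remain.
```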
|
||||
|
||||
// Action:
|
||||
// control
|
||||
// command ("|" command)*
|
||||
// Left delim is past. Now get actions.
|
||||
// First word could be a keyword such as range.
|
||||
func (t *Tree) action() (n Node) {
|
||||
switch token := t.nextNonSpace(); token.typ {
|
||||
case itemElse:
|
||||
return t.elseControl()
|
||||
case itemEnd:
|
||||
return t.endControl()
|
||||
case itemIf:
|
||||
return t.ifControl()
|
||||
case itemRange:
|
||||
return t.rangeControl()
|
||||
case itemTemplate:
|
||||
return t.templateControl()
|
||||
case itemWith:
|
||||
return t.withControl()
|
||||
}
|
||||
t.backup()
|
||||
// Do not pop variables; they persist until "end".
|
||||
return t.newAction(t.peek().pos, t.lex.lineNumber(), t.pipeline("command"))
|
||||
}
|
||||
|
||||
// Pipeline:
|
||||
// declarations? command ('|' command)*
|
||||
func (t *Tree) pipeline(context string) (pipe *PipeNode) {
|
||||
var decl []*VariableNode
|
||||
pos := t.peekNonSpace().pos
|
||||
// Are there declarations?
|
||||
for {
|
||||
if v := t.peekNonSpace(); v.typ == itemVariable {
|
||||
t.next()
|
||||
// Since space is a token, we need 3-token look-ahead here in the worst case:
|
||||
// in "$x foo" we need to read "foo" (as opposed to ":=") to know that $x is an
|
||||
// argument variable rather than a declaration. So remember the token
|
||||
// adjacent to the variable so we can push it back if necessary.
|
||||
tokenAfterVariable := t.peek()
|
||||
if next := t.peekNonSpace(); next.typ == itemColonEquals || (next.typ == itemChar && next.val == ",") {
|
||||
t.nextNonSpace()
|
||||
variable := t.newVariable(v.pos, v.val)
|
||||
decl = append(decl, variable)
|
||||
t.vars = append(t.vars, v.val)
|
||||
if next.typ == itemChar && next.val == "," {
|
||||
if context == "range" && len(decl) < 2 {
|
||||
continue
|
||||
}
|
||||
t.errorf("too many declarations in %s", context)
|
||||
}
|
||||
} else if tokenAfterVariable.typ == itemSpace {
|
||||
t.backup3(v, tokenAfterVariable)
|
||||
} else {
|
||||
t.backup2(v)
|
||||
}
|
||||
}
|
||||
break
|
||||
}
|
||||
pipe = t.newPipeline(pos, t.lex.lineNumber(), decl)
|
||||
for {
|
||||
switch token := t.nextNonSpace(); token.typ {
|
||||
case itemRightDelim, itemRightParen:
|
||||
if len(pipe.Cmds) == 0 {
|
||||
t.errorf("missing value for %s", context)
|
||||
}
|
||||
if token.typ == itemRightParen {
|
||||
t.backup()
|
||||
}
|
||||
return
|
||||
case itemBool, itemCharConstant, itemComplex, itemDot, itemField, itemIdentifier,
|
||||
itemNumber, itemNil, itemRawString, itemString, itemVariable, itemLeftParen:
|
||||
t.backup()
|
||||
pipe.append(t.command())
|
||||
default:
|
||||
t.unexpected(token, context)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (t *Tree) parseControl(allowElseIf bool, context string) (pos Pos, line int, pipe *PipeNode, list, elseList *ListNode) {
|
||||
defer t.popVars(len(t.vars))
|
||||
line = t.lex.lineNumber()
|
||||
pipe = t.pipeline(context)
|
||||
var next Node
|
||||
list, next = t.itemList()
|
||||
switch next.Type() {
|
||||
case nodeEnd: //done
|
||||
case nodeElse:
|
||||
if allowElseIf {
|
||||
// Special case for "else if". If the "else" is followed immediately by an "if",
|
||||
// the elseControl will have left the "if" token pending. Treat
|
||||
// {{if a}}_{{else if b}}_{{end}}
|
||||
// as
|
||||
// {{if a}}_{{else}}{{if b}}_{{end}}{{end}}.
|
||||
// To do this, parse the if as usual and stop at its {{end}}; the subsequent {{end}}
|
||||
// is assumed. This technique works even for long if-else-if chains.
|
||||
// TODO: Should we allow else-if in with and range?
|
||||
if t.peek().typ == itemIf {
|
||||
t.next() // Consume the "if" token.
|
||||
elseList = t.newList(next.Position())
|
||||
elseList.append(t.ifControl())
|
||||
// Do not consume the next item - only one {{end}} required.
|
||||
break
|
||||
}
|
||||
}
|
||||
elseList, next = t.itemList()
|
||||
if next.Type() != nodeEnd {
|
||||
t.errorf("expected end; found %s", next)
|
||||
}
|
||||
}
|
||||
return pipe.Position(), line, pipe, list, elseList
|
||||
}
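
Per the special case above, these two spellings should parse to equivalent trees (illustrative template text):

```go
// The "else if" form is rewritten into a nested if that shares the single
// trailing {{end}}.
const chained = `{{if .A}}a{{else if .B}}b{{else}}c{{end}}`
const nested = `{{if .A}}a{{else}}{{if .B}}b{{else}}c{{end}}{{end}}`
```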
|
||||
|
||||
// If:
|
||||
// {{if pipeline}} itemList {{end}}
|
||||
// {{if pipeline}} itemList {{else}} itemList {{end}}
|
||||
// If keyword is past.
|
||||
func (t *Tree) ifControl() Node {
|
||||
return t.newIf(t.parseControl(true, "if"))
|
||||
}
|
||||
|
||||
// Range:
|
||||
// {{range pipeline}} itemList {{end}}
|
||||
// {{range pipeline}} itemList {{else}} itemList {{end}}
|
||||
// Range keyword is past.
|
||||
func (t *Tree) rangeControl() Node {
|
||||
return t.newRange(t.parseControl(false, "range"))
|
||||
}
|
||||
|
||||
// With:
|
||||
// {{with pipeline}} itemList {{end}}
|
||||
// {{with pipeline}} itemList {{else}} itemList {{end}}
|
||||
// If keyword is past.
|
||||
func (t *Tree) withControl() Node {
|
||||
return t.newWith(t.parseControl(false, "with"))
|
||||
}
|
||||
|
||||
// End:
|
||||
// {{end}}
|
||||
// End keyword is past.
|
||||
func (t *Tree) endControl() Node {
|
||||
return t.newEnd(t.expect(itemRightDelim, "end").pos)
|
||||
}
|
||||
|
||||
// Else:
|
||||
// {{else}}
|
||||
// Else keyword is past.
|
||||
func (t *Tree) elseControl() Node {
|
||||
// Special case for "else if".
|
||||
peek := t.peekNonSpace()
|
||||
if peek.typ == itemIf {
|
||||
// We see "{{else if ... " but in effect rewrite it to {{else}}{{if ... ".
|
||||
return t.newElse(peek.pos, t.lex.lineNumber())
|
||||
}
|
||||
return t.newElse(t.expect(itemRightDelim, "else").pos, t.lex.lineNumber())
|
||||
}
|
||||
|
||||
// Template:
|
||||
// {{template stringValue pipeline}}
|
||||
// Template keyword is past. The name must be something that can evaluate
|
||||
// to a string.
|
||||
func (t *Tree) templateControl() Node {
|
||||
var name string
|
||||
token := t.nextNonSpace()
|
||||
switch token.typ {
|
||||
case itemString, itemRawString:
|
||||
s, err := strconv.Unquote(token.val)
|
||||
if err != nil {
|
||||
t.error(err)
|
||||
}
|
||||
name = s
|
||||
default:
|
||||
t.unexpected(token, "template invocation")
|
||||
}
|
||||
var pipe *PipeNode
|
||||
if t.nextNonSpace().typ != itemRightDelim {
|
||||
t.backup()
|
||||
// Do not pop variables; they persist until "end".
|
||||
pipe = t.pipeline("template")
|
||||
}
|
||||
return t.newTemplate(token.pos, t.lex.lineNumber(), name, pipe)
|
||||
}
|
||||
|
||||
// command:
|
||||
// operand (space operand)*
|
||||
// space-separated arguments up to a pipeline character or right delimiter.
|
||||
// we consume the pipe character but leave the right delim to terminate the action.
|
||||
func (t *Tree) command() *CommandNode {
|
||||
cmd := t.newCommand(t.peekNonSpace().pos)
|
||||
for {
|
||||
t.peekNonSpace() // skip leading spaces.
|
||||
operand := t.operand()
|
||||
if operand != nil {
|
||||
cmd.append(operand)
|
||||
}
|
||||
switch token := t.next(); token.typ {
|
||||
case itemSpace:
|
||||
continue
|
||||
case itemError:
|
||||
t.errorf("%s", token.val)
|
||||
case itemRightDelim, itemRightParen:
|
||||
t.backup()
|
||||
case itemPipe:
|
||||
default:
|
||||
t.errorf("unexpected %s in operand; missing space?", token)
|
||||
}
|
||||
break
|
||||
}
|
||||
if len(cmd.Args) == 0 {
|
||||
t.errorf("empty command")
|
||||
}
|
||||
return cmd
|
||||
}
|
||||
|
||||
// operand:
|
||||
// term .Field*
|
||||
// An operand is a space-separated component of a command,
|
||||
// a term possibly followed by field accesses.
|
||||
// A nil return means the next item is not an operand.
|
||||
func (t *Tree) operand() Node {
|
||||
node := t.term()
|
||||
if node == nil {
|
||||
return nil
|
||||
}
|
||||
if t.peek().typ == itemField {
|
||||
chain := t.newChain(t.peek().pos, node)
|
||||
for t.peek().typ == itemField {
|
||||
chain.Add(t.next().val)
|
||||
}
|
||||
// Compatibility with original API: If the term is of type NodeField
|
||||
// or NodeVariable, just put more fields on the original.
|
||||
// Otherwise, keep the Chain node.
|
||||
// TODO: Switch to Chains always when we can.
|
||||
switch node.Type() {
|
||||
case NodeField:
|
||||
node = t.newField(chain.Position(), chain.String())
|
||||
case NodeVariable:
|
||||
node = t.newVariable(chain.Position(), chain.String())
|
||||
default:
|
||||
node = chain
|
||||
}
|
||||
}
|
||||
return node
|
||||
}
|
||||
|
||||
// term:
|
||||
// literal (number, string, nil, boolean)
|
||||
// function (identifier)
|
||||
// .
|
||||
// .Field
|
||||
// $
|
||||
// '(' pipeline ')'
|
||||
// A term is a simple "expression".
|
||||
// A nil return means the next item is not a term.
|
||||
func (t *Tree) term() Node {
|
||||
switch token := t.nextNonSpace(); token.typ {
|
||||
case itemError:
|
||||
t.errorf("%s", token.val)
|
||||
case itemIdentifier:
|
||||
if !t.hasFunction(token.val) {
|
||||
t.errorf("function %q not defined", token.val)
|
||||
}
|
||||
return NewIdentifier(token.val).SetTree(t).SetPos(token.pos)
|
||||
case itemDot:
|
||||
return t.newDot(token.pos)
|
||||
case itemNil:
|
||||
return t.newNil(token.pos)
|
||||
case itemVariable:
|
||||
return t.useVar(token.pos, token.val)
|
||||
case itemField:
|
||||
return t.newField(token.pos, token.val)
|
||||
case itemBool:
|
||||
return t.newBool(token.pos, token.val == "true")
|
||||
case itemCharConstant, itemComplex, itemNumber:
|
||||
number, err := t.newNumber(token.pos, token.val, token.typ)
|
||||
if err != nil {
|
||||
t.error(err)
|
||||
}
|
||||
return number
|
||||
case itemLeftParen:
|
||||
pipe := t.pipeline("parenthesized pipeline")
|
||||
if token := t.next(); token.typ != itemRightParen {
|
||||
t.errorf("unclosed right paren: unexpected %s", token)
|
||||
}
|
||||
return pipe
|
||||
case itemString, itemRawString:
|
||||
s, err := strconv.Unquote(token.val)
|
||||
if err != nil {
|
||||
t.error(err)
|
||||
}
|
||||
return t.newString(token.pos, token.val, s)
|
||||
}
|
||||
t.backup()
|
||||
return nil
|
||||
}
|
||||
|
||||
// hasFunction reports if a function name exists in the Tree's maps.
|
||||
func (t *Tree) hasFunction(name string) bool {
|
||||
for _, funcMap := range t.funcs {
|
||||
if funcMap == nil {
|
||||
continue
|
||||
}
|
||||
if funcMap[name] != nil {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// popVars trims the variable list to the specified length
|
||||
func (t *Tree) popVars(n int) {
|
||||
t.vars = t.vars[:n]
|
||||
}
|
||||
|
||||
// useVar returns a node for a variable reference. It errors if the
|
||||
// variable is not defined.
|
||||
func (t *Tree) useVar(pos Pos, name string) Node {
|
||||
v := t.newVariable(pos, name)
|
||||
for _, varName := range t.vars {
|
||||
if varName == v.Ident[0] {
|
||||
return v
|
||||
}
|
||||
}
|
||||
t.errorf("undefined variable %q", v.Ident[0])
|
||||
return nil
|
||||
}
|
||||
vendor/github.com/alecthomas/template/template.go (generated, vendored, new file, 218 lines)
@@ -0,0 +1,218 @@
|
||||
// Copyright 2011 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package template
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"reflect"
|
||||
|
||||
"github.com/alecthomas/template/parse"
|
||||
)
|
||||
|
||||
// common holds the information shared by related templates.
|
||||
type common struct {
|
||||
tmpl map[string]*Template
|
||||
// We use two maps, one for parsing and one for execution.
|
||||
// This separation makes the API cleaner since it doesn't
|
||||
// expose reflection to the client.
|
||||
parseFuncs FuncMap
|
||||
execFuncs map[string]reflect.Value
|
||||
}
|
||||
|
||||
// Template is the representation of a parsed template. The *parse.Tree
|
||||
// field is exported only for use by html/template and should be treated
|
||||
// as unexported by all other clients.
|
||||
type Template struct {
|
||||
name string
|
||||
*parse.Tree
|
||||
*common
|
||||
leftDelim string
|
||||
rightDelim string
|
||||
}
|
||||
|
||||
// New allocates a new template with the given name.
|
||||
func New(name string) *Template {
|
||||
return &Template{
|
||||
name: name,
|
||||
}
|
||||
}
|
||||
|
||||
// Name returns the name of the template.
|
||||
func (t *Template) Name() string {
|
||||
return t.name
|
||||
}
|
||||
|
||||
// New allocates a new template associated with the given one and with the same
|
||||
// delimiters. The association, which is transitive, allows one template to
|
||||
// invoke another with a {{template}} action.
|
||||
func (t *Template) New(name string) *Template {
|
||||
t.init()
|
||||
return &Template{
|
||||
name: name,
|
||||
common: t.common,
|
||||
leftDelim: t.leftDelim,
|
||||
rightDelim: t.rightDelim,
|
||||
}
|
||||
}
|
||||
|
||||
func (t *Template) init() {
|
||||
if t.common == nil {
|
||||
t.common = new(common)
|
||||
t.tmpl = make(map[string]*Template)
|
||||
t.parseFuncs = make(FuncMap)
|
||||
t.execFuncs = make(map[string]reflect.Value)
|
||||
}
|
||||
}
|
||||
|
||||
// Clone returns a duplicate of the template, including all associated
|
||||
// templates. The actual representation is not copied, but the name space of
|
||||
// associated templates is, so further calls to Parse in the copy will add
|
||||
// templates to the copy but not to the original. Clone can be used to prepare
|
||||
// common templates and use them with variant definitions for other templates
|
||||
// by adding the variants after the clone is made.
|
||||
func (t *Template) Clone() (*Template, error) {
|
||||
nt := t.copy(nil)
|
||||
nt.init()
|
||||
nt.tmpl[t.name] = nt
|
||||
for k, v := range t.tmpl {
|
||||
if k == t.name { // Already installed.
|
||||
continue
|
||||
}
|
||||
// The associated templates share nt's common structure.
|
||||
tmpl := v.copy(nt.common)
|
||||
nt.tmpl[k] = tmpl
|
||||
}
|
||||
for k, v := range t.parseFuncs {
|
||||
nt.parseFuncs[k] = v
|
||||
}
|
||||
for k, v := range t.execFuncs {
|
||||
nt.execFuncs[k] = v
|
||||
}
|
||||
return nt, nil
|
||||
}
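
A sketch of the Clone workflow the comment describes: prepare a shared layout, clone it, then add a variant body to each copy. Names and bodies are invented, and the Must helper carried over from text/template is assumed.

```go
// Sketch: one shared layout, cloned per variant.
base := template.Must(template.New("layout").Parse(`{{template "body" .}}`))

a, _ := base.Clone()
template.Must(a.New("body").Parse(`variant A`))

b, _ := base.Clone()
template.Must(b.New("body").Parse(`variant B`))
// a and b now carry different "body" definitions; base itself has none.
```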
|
||||
|
||||
// copy returns a shallow copy of t, with common set to the argument.
|
||||
func (t *Template) copy(c *common) *Template {
|
||||
nt := New(t.name)
|
||||
nt.Tree = t.Tree
|
||||
nt.common = c
|
||||
nt.leftDelim = t.leftDelim
|
||||
nt.rightDelim = t.rightDelim
|
||||
return nt
|
||||
}
|
||||
|
||||
// AddParseTree creates a new template with the name and parse tree
|
||||
// and associates it with t.
|
||||
func (t *Template) AddParseTree(name string, tree *parse.Tree) (*Template, error) {
|
||||
if t.common != nil && t.tmpl[name] != nil {
|
||||
return nil, fmt.Errorf("template: redefinition of template %q", name)
|
||||
}
|
||||
nt := t.New(name)
|
||||
nt.Tree = tree
|
||||
t.tmpl[name] = nt
|
||||
return nt, nil
|
||||
}
|
||||
|
||||
// Templates returns a slice of the templates associated with t, including t
|
||||
// itself.
|
||||
func (t *Template) Templates() []*Template {
|
||||
if t.common == nil {
|
||||
return nil
|
||||
}
|
||||
// Return a slice so we don't expose the map.
|
||||
m := make([]*Template, 0, len(t.tmpl))
|
||||
for _, v := range t.tmpl {
|
||||
m = append(m, v)
|
||||
}
|
||||
return m
|
||||
}
|
||||
|
||||
// Delims sets the action delimiters to the specified strings, to be used in
|
||||
// subsequent calls to Parse, ParseFiles, or ParseGlob. Nested template
|
||||
// definitions will inherit the settings. An empty delimiter stands for the
|
||||
// corresponding default: {{ or }}.
|
||||
// The return value is the template, so calls can be chained.
|
||||
func (t *Template) Delims(left, right string) *Template {
|
||||
t.leftDelim = left
|
||||
t.rightDelim = right
|
||||
return t
|
||||
}
|
||||
|
||||
// Funcs adds the elements of the argument map to the template's function map.
|
||||
// It panics if a value in the map is not a function with appropriate return
|
||||
// type. However, it is legal to overwrite elements of the map. The return
|
||||
// value is the template, so calls can be chained.
|
||||
func (t *Template) Funcs(funcMap FuncMap) *Template {
|
||||
t.init()
|
||||
addValueFuncs(t.execFuncs, funcMap)
|
||||
addFuncs(t.parseFuncs, funcMap)
|
||||
return t
|
||||
}
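
A short usage sketch for Funcs; the helper name is arbitrary, and the template and strings packages are assumed imported.

```go
// Sketch: register a helper before parsing so the parser can resolve it.
func greeter() *template.Template {
	t := template.New("greet").Funcs(template.FuncMap{
		"upper": strings.ToUpper, // arbitrary helper name
	})
	return template.Must(t.Parse(`{{upper .Name}}`))
}
```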
|
||||
|
||||
// Lookup returns the template with the given name that is associated with t,
|
||||
// or nil if there is no such template.
|
||||
func (t *Template) Lookup(name string) *Template {
|
||||
if t.common == nil {
|
||||
return nil
|
||||
}
|
||||
return t.tmpl[name]
|
||||
}
|
||||
|
||||
// Parse parses a string into a template. Nested template definitions will be
|
||||
// associated with the top-level template t. Parse may be called multiple times
|
||||
// to parse definitions of templates to associate with t. It is an error if a
|
||||
// resulting template is non-empty (contains content other than template
|
||||
// definitions) and would replace a non-empty template with the same name.
|
||||
// (In multiple calls to Parse with the same receiver template, only one call
|
||||
// can contain text other than space, comments, and template definitions.)
|
||||
func (t *Template) Parse(text string) (*Template, error) {
|
||||
t.init()
|
||||
trees, err := parse.Parse(t.name, text, t.leftDelim, t.rightDelim, t.parseFuncs, builtins)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
// Add the newly parsed trees, including the one for t, into our common structure.
|
||||
for name, tree := range trees {
|
||||
// If the name we parsed is the name of this template, overwrite this template.
|
||||
// The associate method checks it's not a redefinition.
|
||||
tmpl := t
|
||||
if name != t.name {
|
||||
tmpl = t.New(name)
|
||||
}
|
||||
// Even if t == tmpl, we need to install it in the common.tmpl map.
|
||||
if replace, err := t.associate(tmpl, tree); err != nil {
|
||||
return nil, err
|
||||
} else if replace {
|
||||
tmpl.Tree = tree
|
||||
}
|
||||
tmpl.leftDelim = t.leftDelim
|
||||
tmpl.rightDelim = t.rightDelim
|
||||
}
|
||||
return t, nil
|
||||
}
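
A sketch of the multiple-Parse rule in the comment above: definitions may arrive in separate calls, but only one call may carry body text. Names are invented and the Must helper is assumed available.

```go
// Sketch: definitions may arrive in separate Parse calls, but only one call
// may contribute body text other than whitespace and {{define}} blocks.
func buildRoot() *template.Template {
	t := template.New("root")
	template.Must(t.Parse(`{{define "a"}}A{{end}}`))     // definitions only: allowed
	return template.Must(t.Parse(`hello {{template "a"}}`)) // the one non-empty body
	// A further Parse call with more plain text would fail, because it would
	// have to replace the now non-empty "root" body.
}
```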
|
||||
|
||||
// associate installs the new template into the group of templates associated
|
||||
// with t. It is an error to reuse a name except to overwrite an empty
|
||||
// template. The two are already known to share the common structure.
|
||||
// The boolean return value reports whether to store this tree as t.Tree.
|
||||
func (t *Template) associate(new *Template, tree *parse.Tree) (bool, error) {
|
||||
if new.common != t.common {
|
||||
panic("internal error: associate not common")
|
||||
}
|
||||
name := new.name
|
||||
if old := t.tmpl[name]; old != nil {
|
||||
oldIsEmpty := parse.IsEmptyTree(old.Root)
|
||||
newIsEmpty := parse.IsEmptyTree(tree.Root)
|
||||
if newIsEmpty {
|
||||
// Whether old is empty or not, new is empty; no reason to replace old.
|
||||
return false, nil
|
||||
}
|
||||
if !oldIsEmpty {
|
||||
return false, fmt.Errorf("template: redefinition of template %q", name)
|
||||
}
|
||||
}
|
||||
t.tmpl[name] = new
|
||||
return true, nil
|
||||
}
|
||||
vendor/github.com/alecthomas/units/COPYING (generated, vendored, new file, 19 lines)
@@ -0,0 +1,19 @@
|
||||
Copyright (C) 2014 Alec Thomas
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy of
|
||||
this software and associated documentation files (the "Software"), to deal in
|
||||
the Software without restriction, including without limitation the rights to
|
||||
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
|
||||
of the Software, and to permit persons to whom the Software is furnished to do
|
||||
so, subject to the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be included in all
|
||||
copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||
SOFTWARE.
|
||||
vendor/github.com/alecthomas/units/README.md (generated, vendored, new file, 11 lines)
@@ -0,0 +1,11 @@
|
||||
# Units - Helpful unit multipliers and functions for Go
|
||||
|
||||
The goal of this package is to have functionality similar to the [time](http://golang.org/pkg/time/) package.
|
||||
|
||||
It allows for code like this:
|
||||
|
||||
```go
|
||||
n, err := ParseBase2Bytes("1KB")
|
||||
// n == 1024
|
||||
n = units.Mebibyte * 512
|
||||
```
|
||||
vendor/github.com/alecthomas/units/bytes.go (generated, vendored, new file, 83 lines)
@@ -0,0 +1,83 @@
|
||||
package units
|
||||
|
||||
// Base2Bytes is the old non-SI power-of-2 byte scale (1024 bytes in a kilobyte,
|
||||
// etc.).
|
||||
type Base2Bytes int64
|
||||
|
||||
// Base-2 byte units.
|
||||
const (
|
||||
Kibibyte Base2Bytes = 1024
|
||||
KiB = Kibibyte
|
||||
Mebibyte = Kibibyte * 1024
|
||||
MiB = Mebibyte
|
||||
Gibibyte = Mebibyte * 1024
|
||||
GiB = Gibibyte
|
||||
Tebibyte = Gibibyte * 1024
|
||||
TiB = Tebibyte
|
||||
Pebibyte = Tebibyte * 1024
|
||||
PiB = Pebibyte
|
||||
Exbibyte = Pebibyte * 1024
|
||||
EiB = Exbibyte
|
||||
)
|
||||
|
||||
var (
|
||||
bytesUnitMap = MakeUnitMap("iB", "B", 1024)
|
||||
oldBytesUnitMap = MakeUnitMap("B", "B", 1024)
|
||||
)
|
||||
|
||||
// ParseBase2Bytes supports both iB and B in base-2 multipliers. That is, KB
|
||||
// and KiB are both 1024.
|
||||
func ParseBase2Bytes(s string) (Base2Bytes, error) {
|
||||
n, err := ParseUnit(s, bytesUnitMap)
|
||||
if err != nil {
|
||||
n, err = ParseUnit(s, oldBytesUnitMap)
|
||||
}
|
||||
return Base2Bytes(n), err
|
||||
}
|
||||
|
||||
func (b Base2Bytes) String() string {
|
||||
return ToString(int64(b), 1024, "iB", "B")
|
||||
}
|
||||
|
||||
var (
|
||||
metricBytesUnitMap = MakeUnitMap("B", "B", 1000)
|
||||
)
|
||||
|
||||
// MetricBytes are SI byte units (1000 bytes in a kilobyte).
|
||||
type MetricBytes SI
|
||||
|
||||
// SI base-10 byte units.
|
||||
const (
|
||||
Kilobyte MetricBytes = 1000
|
||||
KB = Kilobyte
|
||||
Megabyte = Kilobyte * 1000
|
||||
MB = Megabyte
|
||||
Gigabyte = Megabyte * 1000
|
||||
GB = Gigabyte
|
||||
Terabyte = Gigabyte * 1000
|
||||
TB = Terabyte
|
||||
Petabyte = Terabyte * 1000
|
||||
PB = Petabyte
|
||||
Exabyte = Petabyte * 1000
|
||||
EB = Exabyte
|
||||
)
|
||||
|
||||
// ParseMetricBytes parses base-10 metric byte units. That is, KB is 1000 bytes.
|
||||
func ParseMetricBytes(s string) (MetricBytes, error) {
|
||||
n, err := ParseUnit(s, metricBytesUnitMap)
|
||||
return MetricBytes(n), err
|
||||
}
|
||||
|
||||
func (m MetricBytes) String() string {
|
||||
return ToString(int64(m), 1000, "B", "B")
|
||||
}
|
||||
|
||||
// ParseStrictBytes supports both iB and B suffixes for base 2 and metric,
|
||||
// respectively. That is, KiB represents 1024 and KB represents 1000.
|
||||
func ParseStrictBytes(s string) (int64, error) {
|
||||
n, err := ParseUnit(s, bytesUnitMap)
|
||||
if err != nil {
|
||||
n, err = ParseUnit(s, metricBytesUnitMap)
|
||||
}
|
||||
return int64(n), err
|
||||
}
|
||||
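The three parse helpers in bytes.go differ only in which unit maps they consult. A minimal sketch of the practical difference, assuming the same import path as above:

```go
package main

import (
	"fmt"

	"github.com/alecthomas/units"
)

func main() {
	a, _ := units.ParseBase2Bytes("1KB")  // lenient base-2: KB and KiB are both 1024
	b, _ := units.ParseMetricBytes("1KB") // SI: KB is 1000
	c, _ := units.ParseStrictBytes("1KB") // strict: KB is 1000, KiB would be 1024

	fmt.Println(a, int64(b), c) // 1KiB 1000 1000
}
```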
13 vendor/github.com/alecthomas/units/doc.go generated vendored Normal file
@@ -0,0 +1,13 @@
|
||||
// Package units provides helpful unit multipliers and functions for Go.
|
||||
//
|
||||
// The goal of this package is to have functionality similar to the time [1] package.
|
||||
//
|
||||
//
|
||||
// [1] http://golang.org/pkg/time/
|
||||
//
|
||||
// It allows for code like this:
|
||||
//
|
||||
// n, err := ParseBase2Bytes("1KB")
|
||||
// // n == 1024
|
||||
// n = units.Mebibyte * 512
|
||||
package units
|
||||
26 vendor/github.com/alecthomas/units/si.go generated vendored Normal file
@@ -0,0 +1,26 @@
|
||||
package units
|
||||
|
||||
// SI units.
|
||||
type SI int64
|
||||
|
||||
// SI unit multiples.
|
||||
const (
|
||||
Kilo SI = 1000
|
||||
Mega = Kilo * 1000
|
||||
Giga = Mega * 1000
|
||||
Tera = Giga * 1000
|
||||
Peta = Tera * 1000
|
||||
Exa = Peta * 1000
|
||||
)
|
||||
|
||||
func MakeUnitMap(suffix, shortSuffix string, scale int64) map[string]float64 {
|
||||
return map[string]float64{
|
||||
shortSuffix: 1,
|
||||
"K" + suffix: float64(scale),
|
||||
"M" + suffix: float64(scale * scale),
|
||||
"G" + suffix: float64(scale * scale * scale),
|
||||
"T" + suffix: float64(scale * scale * scale * scale),
|
||||
"P" + suffix: float64(scale * scale * scale * scale * scale),
|
||||
"E" + suffix: float64(scale * scale * scale * scale * scale * scale),
|
||||
}
|
||||
}
|
||||
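MakeUnitMap expands a base suffix into the K/M/G/T/P/E multiples that ParseUnit (in util.go, the next file) understands. A hedged sketch with an invented "Bps" rate suffix, used purely for illustration:

```go
package main

import (
	"fmt"

	"github.com/alecthomas/units"
)

func main() {
	// "Bps" is a made-up suffix for this example; any suffix works.
	// rate == map[string]float64{"Bps": 1, "KBps": 1e3, "MBps": 1e6, ...}
	rate := units.MakeUnitMap("Bps", "Bps", 1000)

	n, err := units.ParseUnit("2.5MBps", rate) // ParseUnit is defined in util.go below
	if err != nil {
		panic(err)
	}
	fmt.Println(n) // 2500000
}
```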
138 vendor/github.com/alecthomas/units/util.go generated vendored Normal file
@@ -0,0 +1,138 @@
|
||||
package units
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"strings"
|
||||
)
|
||||
|
||||
var (
|
||||
siUnits = []string{"", "K", "M", "G", "T", "P", "E"}
|
||||
)
|
||||
|
||||
func ToString(n int64, scale int64, suffix, baseSuffix string) string {
|
||||
mn := len(siUnits)
|
||||
out := make([]string, mn)
|
||||
for i, m := range siUnits {
|
||||
if n%scale != 0 || i == 0 && n == 0 {
|
||||
s := suffix
|
||||
if i == 0 {
|
||||
s = baseSuffix
|
||||
}
|
||||
out[mn-1-i] = fmt.Sprintf("%d%s%s", n%scale, m, s)
|
||||
}
|
||||
n /= scale
|
||||
if n == 0 {
|
||||
break
|
||||
}
|
||||
}
|
||||
return strings.Join(out, "")
|
||||
}
|
||||
|
||||
// Below code ripped straight from http://golang.org/src/pkg/time/format.go?s=33392:33438#L1123
|
||||
var errLeadingInt = errors.New("units: bad [0-9]*") // never printed
|
||||
|
||||
// leadingInt consumes the leading [0-9]* from s.
|
||||
func leadingInt(s string) (x int64, rem string, err error) {
|
||||
i := 0
|
||||
for ; i < len(s); i++ {
|
||||
c := s[i]
|
||||
if c < '0' || c > '9' {
|
||||
break
|
||||
}
|
||||
if x >= (1<<63-10)/10 {
|
||||
// overflow
|
||||
return 0, "", errLeadingInt
|
||||
}
|
||||
x = x*10 + int64(c) - '0'
|
||||
}
|
||||
return x, s[i:], nil
|
||||
}
|
||||
|
||||
func ParseUnit(s string, unitMap map[string]float64) (int64, error) {
|
||||
// [-+]?([0-9]*(\.[0-9]*)?[a-z]+)+
|
||||
orig := s
|
||||
f := float64(0)
|
||||
neg := false
|
||||
|
||||
// Consume [-+]?
|
||||
if s != "" {
|
||||
c := s[0]
|
||||
if c == '-' || c == '+' {
|
||||
neg = c == '-'
|
||||
s = s[1:]
|
||||
}
|
||||
}
|
||||
// Special case: if all that is left is "0", this is zero.
|
||||
if s == "0" {
|
||||
return 0, nil
|
||||
}
|
||||
if s == "" {
|
||||
return 0, errors.New("units: invalid " + orig)
|
||||
}
|
||||
for s != "" {
|
||||
g := float64(0) // this element of the sequence
|
||||
|
||||
var x int64
|
||||
var err error
|
||||
|
||||
// The next character must be [0-9.]
|
||||
if !(s[0] == '.' || ('0' <= s[0] && s[0] <= '9')) {
|
||||
return 0, errors.New("units: invalid " + orig)
|
||||
}
|
||||
// Consume [0-9]*
|
||||
pl := len(s)
|
||||
x, s, err = leadingInt(s)
|
||||
if err != nil {
|
||||
return 0, errors.New("units: invalid " + orig)
|
||||
}
|
||||
g = float64(x)
|
||||
pre := pl != len(s) // whether we consumed anything before a period
|
||||
|
||||
// Consume (\.[0-9]*)?
|
||||
post := false
|
||||
if s != "" && s[0] == '.' {
|
||||
s = s[1:]
|
||||
pl := len(s)
|
||||
x, s, err = leadingInt(s)
|
||||
if err != nil {
|
||||
return 0, errors.New("units: invalid " + orig)
|
||||
}
|
||||
scale := 1.0
|
||||
for n := pl - len(s); n > 0; n-- {
|
||||
scale *= 10
|
||||
}
|
||||
g += float64(x) / scale
|
||||
post = pl != len(s)
|
||||
}
|
||||
if !pre && !post {
|
||||
// no digits (e.g. ".s" or "-.s")
|
||||
return 0, errors.New("units: invalid " + orig)
|
||||
}
|
||||
|
||||
// Consume unit.
|
||||
i := 0
|
||||
for ; i < len(s); i++ {
|
||||
c := s[i]
|
||||
if c == '.' || ('0' <= c && c <= '9') {
|
||||
break
|
||||
}
|
||||
}
|
||||
u := s[:i]
|
||||
s = s[i:]
|
||||
unit, ok := unitMap[u]
|
||||
if !ok {
|
||||
return 0, errors.New("units: unknown unit " + u + " in " + orig)
|
||||
}
|
||||
|
||||
f += g * unit
|
||||
}
|
||||
|
||||
if neg {
|
||||
f = -f
|
||||
}
|
||||
if f < float64(-1<<63) || f > float64(1<<63-1) {
|
||||
return 0, errors.New("units: overflow parsing unit")
|
||||
}
|
||||
return int64(f), nil
|
||||
}
|
||||
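Because ParseUnit follows the same grammar as time.ParseDuration, value+unit pairs can be chained, and ToString emits the same chained form. A small sketch, assuming the import path used earlier:

```go
package main

import (
	"fmt"

	"github.com/alecthomas/units"
)

func main() {
	// Chained value+unit pairs are summed, as in time.ParseDuration.
	n, err := units.ParseBase2Bytes("1MiB512KiB")
	if err != nil {
		panic(err)
	}
	fmt.Println(int64(n)) // 1572864
	fmt.Println(n)        // 1MiB512KiB (ToString re-emits the chained form)
}
```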
15 vendor/github.com/golang/snappy/AUTHORS generated vendored Normal file
@@ -0,0 +1,15 @@
|
||||
# This is the official list of Snappy-Go authors for copyright purposes.
|
||||
# This file is distinct from the CONTRIBUTORS files.
|
||||
# See the latter for an explanation.
|
||||
|
||||
# Names should be added to this file as
|
||||
# Name or Organization <email address>
|
||||
# The email address is not required for organizations.
|
||||
|
||||
# Please keep the list sorted.
|
||||
|
||||
Damian Gryski <dgryski@gmail.com>
|
||||
Google Inc.
|
||||
Jan Mercl <0xjnml@gmail.com>
|
||||
Rodolfo Carvalho <rhcarvalho@gmail.com>
|
||||
Sebastien Binet <seb.binet@gmail.com>
|
||||
37 vendor/github.com/golang/snappy/CONTRIBUTORS generated vendored Normal file
@@ -0,0 +1,37 @@
|
||||
# This is the official list of people who can contribute
|
||||
# (and typically have contributed) code to the Snappy-Go repository.
|
||||
# The AUTHORS file lists the copyright holders; this file
|
||||
# lists people. For example, Google employees are listed here
|
||||
# but not in AUTHORS, because Google holds the copyright.
|
||||
#
|
||||
# The submission process automatically checks to make sure
|
||||
# that people submitting code are listed in this file (by email address).
|
||||
#
|
||||
# Names should be added to this file only after verifying that
|
||||
# the individual or the individual's organization has agreed to
|
||||
# the appropriate Contributor License Agreement, found here:
|
||||
#
|
||||
# http://code.google.com/legal/individual-cla-v1.0.html
|
||||
# http://code.google.com/legal/corporate-cla-v1.0.html
|
||||
#
|
||||
# The agreement for individuals can be filled out on the web.
|
||||
#
|
||||
# When adding J Random Contributor's name to this file,
|
||||
# either J's name or J's organization's name should be
|
||||
# added to the AUTHORS file, depending on whether the
|
||||
# individual or corporate CLA was used.
|
||||
|
||||
# Names should be added to this file like so:
|
||||
# Name <email address>
|
||||
|
||||
# Please keep the list sorted.
|
||||
|
||||
Damian Gryski <dgryski@gmail.com>
|
||||
Jan Mercl <0xjnml@gmail.com>
|
||||
Kai Backman <kaib@golang.org>
|
||||
Marc-Antoine Ruel <maruel@chromium.org>
|
||||
Nigel Tao <nigeltao@golang.org>
|
||||
Rob Pike <r@golang.org>
|
||||
Rodolfo Carvalho <rhcarvalho@gmail.com>
|
||||
Russ Cox <rsc@golang.org>
|
||||
Sebastien Binet <seb.binet@gmail.com>
|
||||
27 vendor/github.com/golang/snappy/LICENSE generated vendored Normal file
@@ -0,0 +1,27 @@
|
||||
Copyright (c) 2011 The Snappy-Go Authors. All rights reserved.
|
||||
|
||||
Redistribution and use in source and binary forms, with or without
|
||||
modification, are permitted provided that the following conditions are
|
||||
met:
|
||||
|
||||
* Redistributions of source code must retain the above copyright
|
||||
notice, this list of conditions and the following disclaimer.
|
||||
* Redistributions in binary form must reproduce the above
|
||||
copyright notice, this list of conditions and the following disclaimer
|
||||
in the documentation and/or other materials provided with the
|
||||
distribution.
|
||||
* Neither the name of Google Inc. nor the names of its
|
||||
contributors may be used to endorse or promote products derived from
|
||||
this software without specific prior written permission.
|
||||
|
||||
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
|
||||
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
|
||||
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
|
||||
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
|
||||
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
|
||||
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
|
||||
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
|
||||
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
|
||||
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
|
||||
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
|
||||
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||
107 vendor/github.com/golang/snappy/README generated vendored Normal file
@@ -0,0 +1,107 @@
|
||||
The Snappy compression format in the Go programming language.
|
||||
|
||||
To download and install from source:
|
||||
$ go get github.com/golang/snappy
|
||||
|
||||
Unless otherwise noted, the Snappy-Go source files are distributed
|
||||
under the BSD-style license found in the LICENSE file.
|
||||
|
||||
|
||||
|
||||
Benchmarks.
|
||||
|
||||
The golang/snappy benchmarks include compressing (Z) and decompressing (U) ten
|
||||
or so files, the same set used by the C++ Snappy code (github.com/google/snappy
|
||||
and note the "google", not "golang"). On an "Intel(R) Core(TM) i7-3770 CPU @
|
||||
3.40GHz", Go's GOARCH=amd64 numbers as of 2016-05-29:
|
||||
|
||||
"go test -test.bench=."
|
||||
|
||||
_UFlat0-8 2.19GB/s ± 0% html
|
||||
_UFlat1-8 1.41GB/s ± 0% urls
|
||||
_UFlat2-8 23.5GB/s ± 2% jpg
|
||||
_UFlat3-8 1.91GB/s ± 0% jpg_200
|
||||
_UFlat4-8 14.0GB/s ± 1% pdf
|
||||
_UFlat5-8 1.97GB/s ± 0% html4
|
||||
_UFlat6-8 814MB/s ± 0% txt1
|
||||
_UFlat7-8 785MB/s ± 0% txt2
|
||||
_UFlat8-8 857MB/s ± 0% txt3
|
||||
_UFlat9-8 719MB/s ± 1% txt4
|
||||
_UFlat10-8 2.84GB/s ± 0% pb
|
||||
_UFlat11-8 1.05GB/s ± 0% gaviota
|
||||
|
||||
_ZFlat0-8 1.04GB/s ± 0% html
|
||||
_ZFlat1-8 534MB/s ± 0% urls
|
||||
_ZFlat2-8 15.7GB/s ± 1% jpg
|
||||
_ZFlat3-8 740MB/s ± 3% jpg_200
|
||||
_ZFlat4-8 9.20GB/s ± 1% pdf
|
||||
_ZFlat5-8 991MB/s ± 0% html4
|
||||
_ZFlat6-8 379MB/s ± 0% txt1
|
||||
_ZFlat7-8 352MB/s ± 0% txt2
|
||||
_ZFlat8-8 396MB/s ± 1% txt3
|
||||
_ZFlat9-8 327MB/s ± 1% txt4
|
||||
_ZFlat10-8 1.33GB/s ± 1% pb
|
||||
_ZFlat11-8 605MB/s ± 1% gaviota
|
||||
|
||||
|
||||
|
||||
"go test -test.bench=. -tags=noasm"
|
||||
|
||||
_UFlat0-8 621MB/s ± 2% html
|
||||
_UFlat1-8 494MB/s ± 1% urls
|
||||
_UFlat2-8 23.2GB/s ± 1% jpg
|
||||
_UFlat3-8 1.12GB/s ± 1% jpg_200
|
||||
_UFlat4-8 4.35GB/s ± 1% pdf
|
||||
_UFlat5-8 609MB/s ± 0% html4
|
||||
_UFlat6-8 296MB/s ± 0% txt1
|
||||
_UFlat7-8 288MB/s ± 0% txt2
|
||||
_UFlat8-8 309MB/s ± 1% txt3
|
||||
_UFlat9-8 280MB/s ± 1% txt4
|
||||
_UFlat10-8 753MB/s ± 0% pb
|
||||
_UFlat11-8 400MB/s ± 0% gaviota
|
||||
|
||||
_ZFlat0-8 409MB/s ± 1% html
|
||||
_ZFlat1-8 250MB/s ± 1% urls
|
||||
_ZFlat2-8 12.3GB/s ± 1% jpg
|
||||
_ZFlat3-8 132MB/s ± 0% jpg_200
|
||||
_ZFlat4-8 2.92GB/s ± 0% pdf
|
||||
_ZFlat5-8 405MB/s ± 1% html4
|
||||
_ZFlat6-8 179MB/s ± 1% txt1
|
||||
_ZFlat7-8 170MB/s ± 1% txt2
|
||||
_ZFlat8-8 189MB/s ± 1% txt3
|
||||
_ZFlat9-8 164MB/s ± 1% txt4
|
||||
_ZFlat10-8 479MB/s ± 1% pb
|
||||
_ZFlat11-8 270MB/s ± 1% gaviota
|
||||
|
||||
|
||||
|
||||
For comparison (Go's encoded output is byte-for-byte identical to C++'s), here
|
||||
are the numbers from C++ Snappy's
|
||||
|
||||
make CXXFLAGS="-O2 -DNDEBUG -g" clean snappy_unittest.log && cat snappy_unittest.log
|
||||
|
||||
BM_UFlat/0 2.4GB/s html
|
||||
BM_UFlat/1 1.4GB/s urls
|
||||
BM_UFlat/2 21.8GB/s jpg
|
||||
BM_UFlat/3 1.5GB/s jpg_200
|
||||
BM_UFlat/4 13.3GB/s pdf
|
||||
BM_UFlat/5 2.1GB/s html4
|
||||
BM_UFlat/6 1.0GB/s txt1
|
||||
BM_UFlat/7 959.4MB/s txt2
|
||||
BM_UFlat/8 1.0GB/s txt3
|
||||
BM_UFlat/9 864.5MB/s txt4
|
||||
BM_UFlat/10 2.9GB/s pb
|
||||
BM_UFlat/11 1.2GB/s gaviota
|
||||
|
||||
BM_ZFlat/0 944.3MB/s html (22.31 %)
|
||||
BM_ZFlat/1 501.6MB/s urls (47.78 %)
|
||||
BM_ZFlat/2 14.3GB/s jpg (99.95 %)
|
||||
BM_ZFlat/3 538.3MB/s jpg_200 (73.00 %)
|
||||
BM_ZFlat/4 8.3GB/s pdf (83.30 %)
|
||||
BM_ZFlat/5 903.5MB/s html4 (22.52 %)
|
||||
BM_ZFlat/6 336.0MB/s txt1 (57.88 %)
|
||||
BM_ZFlat/7 312.3MB/s txt2 (61.91 %)
|
||||
BM_ZFlat/8 353.1MB/s txt3 (54.99 %)
|
||||
BM_ZFlat/9 289.9MB/s txt4 (66.26 %)
|
||||
BM_ZFlat/10 1.2GB/s pb (19.68 %)
|
||||
BM_ZFlat/11 527.4MB/s gaviota (37.72 %)
|
||||
237 vendor/github.com/golang/snappy/decode.go generated vendored Normal file
@@ -0,0 +1,237 @@
|
||||
// Copyright 2011 The Snappy-Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package snappy
|
||||
|
||||
import (
|
||||
"encoding/binary"
|
||||
"errors"
|
||||
"io"
|
||||
)
|
||||
|
||||
var (
|
||||
// ErrCorrupt reports that the input is invalid.
|
||||
ErrCorrupt = errors.New("snappy: corrupt input")
|
||||
// ErrTooLarge reports that the uncompressed length is too large.
|
||||
ErrTooLarge = errors.New("snappy: decoded block is too large")
|
||||
// ErrUnsupported reports that the input isn't supported.
|
||||
ErrUnsupported = errors.New("snappy: unsupported input")
|
||||
|
||||
errUnsupportedLiteralLength = errors.New("snappy: unsupported literal length")
|
||||
)
|
||||
|
||||
// DecodedLen returns the length of the decoded block.
|
||||
func DecodedLen(src []byte) (int, error) {
|
||||
v, _, err := decodedLen(src)
|
||||
return v, err
|
||||
}
|
||||
|
||||
// decodedLen returns the length of the decoded block and the number of bytes
|
||||
// that the length header occupied.
|
||||
func decodedLen(src []byte) (blockLen, headerLen int, err error) {
|
||||
v, n := binary.Uvarint(src)
|
||||
if n <= 0 || v > 0xffffffff {
|
||||
return 0, 0, ErrCorrupt
|
||||
}
|
||||
|
||||
const wordSize = 32 << (^uint(0) >> 32 & 1)
|
||||
if wordSize == 32 && v > 0x7fffffff {
|
||||
return 0, 0, ErrTooLarge
|
||||
}
|
||||
return int(v), n, nil
|
||||
}
|
||||
|
||||
const (
|
||||
decodeErrCodeCorrupt = 1
|
||||
decodeErrCodeUnsupportedLiteralLength = 2
|
||||
)
|
||||
|
||||
// Decode returns the decoded form of src. The returned slice may be a sub-
|
||||
// slice of dst if dst was large enough to hold the entire decoded block.
|
||||
// Otherwise, a newly allocated slice will be returned.
|
||||
//
|
||||
// The dst and src must not overlap. It is valid to pass a nil dst.
|
||||
func Decode(dst, src []byte) ([]byte, error) {
|
||||
dLen, s, err := decodedLen(src)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if dLen <= len(dst) {
|
||||
dst = dst[:dLen]
|
||||
} else {
|
||||
dst = make([]byte, dLen)
|
||||
}
|
||||
switch decode(dst, src[s:]) {
|
||||
case 0:
|
||||
return dst, nil
|
||||
case decodeErrCodeUnsupportedLiteralLength:
|
||||
return nil, errUnsupportedLiteralLength
|
||||
}
|
||||
return nil, ErrCorrupt
|
||||
}
|
||||
|
||||
// NewReader returns a new Reader that decompresses from r, using the framing
|
||||
// format described at
|
||||
// https://github.com/google/snappy/blob/master/framing_format.txt
|
||||
func NewReader(r io.Reader) *Reader {
|
||||
return &Reader{
|
||||
r: r,
|
||||
decoded: make([]byte, maxBlockSize),
|
||||
buf: make([]byte, maxEncodedLenOfMaxBlockSize+checksumSize),
|
||||
}
|
||||
}
|
||||
|
||||
// Reader is an io.Reader that can read Snappy-compressed bytes.
|
||||
type Reader struct {
|
||||
r io.Reader
|
||||
err error
|
||||
decoded []byte
|
||||
buf []byte
|
||||
// decoded[i:j] contains decoded bytes that have not yet been passed on.
|
||||
i, j int
|
||||
readHeader bool
|
||||
}
|
||||
|
||||
// Reset discards any buffered data, resets all state, and switches the Snappy
|
||||
// reader to read from r. This permits reusing a Reader rather than allocating
|
||||
// a new one.
|
||||
func (r *Reader) Reset(reader io.Reader) {
|
||||
r.r = reader
|
||||
r.err = nil
|
||||
r.i = 0
|
||||
r.j = 0
|
||||
r.readHeader = false
|
||||
}
|
||||
|
||||
func (r *Reader) readFull(p []byte, allowEOF bool) (ok bool) {
|
||||
if _, r.err = io.ReadFull(r.r, p); r.err != nil {
|
||||
if r.err == io.ErrUnexpectedEOF || (r.err == io.EOF && !allowEOF) {
|
||||
r.err = ErrCorrupt
|
||||
}
|
||||
return false
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
// Read satisfies the io.Reader interface.
|
||||
func (r *Reader) Read(p []byte) (int, error) {
|
||||
if r.err != nil {
|
||||
return 0, r.err
|
||||
}
|
||||
for {
|
||||
if r.i < r.j {
|
||||
n := copy(p, r.decoded[r.i:r.j])
|
||||
r.i += n
|
||||
return n, nil
|
||||
}
|
||||
if !r.readFull(r.buf[:4], true) {
|
||||
return 0, r.err
|
||||
}
|
||||
chunkType := r.buf[0]
|
||||
if !r.readHeader {
|
||||
if chunkType != chunkTypeStreamIdentifier {
|
||||
r.err = ErrCorrupt
|
||||
return 0, r.err
|
||||
}
|
||||
r.readHeader = true
|
||||
}
|
||||
chunkLen := int(r.buf[1]) | int(r.buf[2])<<8 | int(r.buf[3])<<16
|
||||
if chunkLen > len(r.buf) {
|
||||
r.err = ErrUnsupported
|
||||
return 0, r.err
|
||||
}
|
||||
|
||||
// The chunk types are specified at
|
||||
// https://github.com/google/snappy/blob/master/framing_format.txt
|
||||
switch chunkType {
|
||||
case chunkTypeCompressedData:
|
||||
// Section 4.2. Compressed data (chunk type 0x00).
|
||||
if chunkLen < checksumSize {
|
||||
r.err = ErrCorrupt
|
||||
return 0, r.err
|
||||
}
|
||||
buf := r.buf[:chunkLen]
|
||||
if !r.readFull(buf, false) {
|
||||
return 0, r.err
|
||||
}
|
||||
checksum := uint32(buf[0]) | uint32(buf[1])<<8 | uint32(buf[2])<<16 | uint32(buf[3])<<24
|
||||
buf = buf[checksumSize:]
|
||||
|
||||
n, err := DecodedLen(buf)
|
||||
if err != nil {
|
||||
r.err = err
|
||||
return 0, r.err
|
||||
}
|
||||
if n > len(r.decoded) {
|
||||
r.err = ErrCorrupt
|
||||
return 0, r.err
|
||||
}
|
||||
if _, err := Decode(r.decoded, buf); err != nil {
|
||||
r.err = err
|
||||
return 0, r.err
|
||||
}
|
||||
if crc(r.decoded[:n]) != checksum {
|
||||
r.err = ErrCorrupt
|
||||
return 0, r.err
|
||||
}
|
||||
r.i, r.j = 0, n
|
||||
continue
|
||||
|
||||
case chunkTypeUncompressedData:
|
||||
// Section 4.3. Uncompressed data (chunk type 0x01).
|
||||
if chunkLen < checksumSize {
|
||||
r.err = ErrCorrupt
|
||||
return 0, r.err
|
||||
}
|
||||
buf := r.buf[:checksumSize]
|
||||
if !r.readFull(buf, false) {
|
||||
return 0, r.err
|
||||
}
|
||||
checksum := uint32(buf[0]) | uint32(buf[1])<<8 | uint32(buf[2])<<16 | uint32(buf[3])<<24
|
||||
// Read directly into r.decoded instead of via r.buf.
|
||||
n := chunkLen - checksumSize
|
||||
if n > len(r.decoded) {
|
||||
r.err = ErrCorrupt
|
||||
return 0, r.err
|
||||
}
|
||||
if !r.readFull(r.decoded[:n], false) {
|
||||
return 0, r.err
|
||||
}
|
||||
if crc(r.decoded[:n]) != checksum {
|
||||
r.err = ErrCorrupt
|
||||
return 0, r.err
|
||||
}
|
||||
r.i, r.j = 0, n
|
||||
continue
|
||||
|
||||
case chunkTypeStreamIdentifier:
|
||||
// Section 4.1. Stream identifier (chunk type 0xff).
|
||||
if chunkLen != len(magicBody) {
|
||||
r.err = ErrCorrupt
|
||||
return 0, r.err
|
||||
}
|
||||
if !r.readFull(r.buf[:len(magicBody)], false) {
|
||||
return 0, r.err
|
||||
}
|
||||
for i := 0; i < len(magicBody); i++ {
|
||||
if r.buf[i] != magicBody[i] {
|
||||
r.err = ErrCorrupt
|
||||
return 0, r.err
|
||||
}
|
||||
}
|
||||
continue
|
||||
}
|
||||
|
||||
if chunkType <= 0x7f {
|
||||
// Section 4.5. Reserved unskippable chunks (chunk types 0x02-0x7f).
|
||||
r.err = ErrUnsupported
|
||||
return 0, r.err
|
||||
}
|
||||
// Section 4.4 Padding (chunk type 0xfe).
|
||||
// Section 4.6. Reserved skippable chunks (chunk types 0x80-0xfd).
|
||||
if !r.readFull(r.buf[:chunkLen], false) {
|
||||
return 0, r.err
|
||||
}
|
||||
}
|
||||
}
|
||||
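A hedged round-trip sketch of the two snappy APIs shown in this vendored copy: the block-level Encode/Decode pair (Encode appears in encode.go further down) and the framing-format Reader/Writer. The import path github.com/golang/snappy is assumed.

```go
package main

import (
	"bytes"
	"fmt"
	"io/ioutil"

	"github.com/golang/snappy"
)

func main() {
	src := bytes.Repeat([]byte("hello "), 100)

	// Block format: Encode/Decode operate on whole []byte blocks.
	block := snappy.Encode(nil, src)
	out, err := snappy.Decode(nil, block)
	if err != nil {
		panic(err)
	}
	fmt.Println(bytes.Equal(out, src)) // true

	// Stream format: Reader consumes the framing format produced by the Writer.
	var buf bytes.Buffer
	w := snappy.NewBufferedWriter(&buf)
	w.Write(src)
	w.Close() // flushes the final chunk
	streamed, _ := ioutil.ReadAll(snappy.NewReader(&buf))
	fmt.Println(bytes.Equal(streamed, src)) // true
}
```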
14 vendor/github.com/golang/snappy/decode_amd64.go generated vendored Normal file
@@ -0,0 +1,14 @@
|
||||
// Copyright 2016 The Snappy-Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
// +build !appengine
|
||||
// +build gc
|
||||
// +build !noasm
|
||||
|
||||
package snappy
|
||||
|
||||
// decode has the same semantics as in decode_other.go.
|
||||
//
|
||||
//go:noescape
|
||||
func decode(dst, src []byte) int
|
||||
490 vendor/github.com/golang/snappy/decode_amd64.s generated vendored Normal file
@@ -0,0 +1,490 @@
|
||||
// Copyright 2016 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
// +build !appengine
|
||||
// +build gc
|
||||
// +build !noasm
|
||||
|
||||
#include "textflag.h"
|
||||
|
||||
// The asm code generally follows the pure Go code in decode_other.go, except
|
||||
// where marked with a "!!!".
|
||||
|
||||
// func decode(dst, src []byte) int
|
||||
//
|
||||
// All local variables fit into registers. The non-zero stack size is only to
|
||||
// spill registers and push args when issuing a CALL. The register allocation:
|
||||
// - AX scratch
|
||||
// - BX scratch
|
||||
// - CX length or x
|
||||
// - DX offset
|
||||
// - SI &src[s]
|
||||
// - DI &dst[d]
|
||||
// + R8 dst_base
|
||||
// + R9 dst_len
|
||||
// + R10 dst_base + dst_len
|
||||
// + R11 src_base
|
||||
// + R12 src_len
|
||||
// + R13 src_base + src_len
|
||||
// - R14 used by doCopy
|
||||
// - R15 used by doCopy
|
||||
//
|
||||
// The registers R8-R13 (marked with a "+") are set at the start of the
|
||||
// function, and after a CALL returns, and are not otherwise modified.
|
||||
//
|
||||
// The d variable is implicitly DI - R8, and len(dst)-d is R10 - DI.
|
||||
// The s variable is implicitly SI - R11, and len(src)-s is R13 - SI.
|
||||
TEXT ·decode(SB), NOSPLIT, $48-56
|
||||
// Initialize SI, DI and R8-R13.
|
||||
MOVQ dst_base+0(FP), R8
|
||||
MOVQ dst_len+8(FP), R9
|
||||
MOVQ R8, DI
|
||||
MOVQ R8, R10
|
||||
ADDQ R9, R10
|
||||
MOVQ src_base+24(FP), R11
|
||||
MOVQ src_len+32(FP), R12
|
||||
MOVQ R11, SI
|
||||
MOVQ R11, R13
|
||||
ADDQ R12, R13
|
||||
|
||||
loop:
|
||||
// for s < len(src)
|
||||
CMPQ SI, R13
|
||||
JEQ end
|
||||
|
||||
// CX = uint32(src[s])
|
||||
//
|
||||
// switch src[s] & 0x03
|
||||
MOVBLZX (SI), CX
|
||||
MOVL CX, BX
|
||||
ANDL $3, BX
|
||||
CMPL BX, $1
|
||||
JAE tagCopy
|
||||
|
||||
// ----------------------------------------
|
||||
// The code below handles literal tags.
|
||||
|
||||
// case tagLiteral:
|
||||
// x := uint32(src[s] >> 2)
|
||||
// switch
|
||||
SHRL $2, CX
|
||||
CMPL CX, $60
|
||||
JAE tagLit60Plus
|
||||
|
||||
// case x < 60:
|
||||
// s++
|
||||
INCQ SI
|
||||
|
||||
doLit:
|
||||
// This is the end of the inner "switch", when we have a literal tag.
|
||||
//
|
||||
// We assume that CX == x and x fits in a uint32, where x is the variable
|
||||
// used in the pure Go decode_other.go code.
|
||||
|
||||
// length = int(x) + 1
|
||||
//
|
||||
// Unlike the pure Go code, we don't need to check if length <= 0 because
|
||||
// CX can hold 64 bits, so the increment cannot overflow.
|
||||
INCQ CX
|
||||
|
||||
// Prepare to check if copying length bytes will run past the end of dst or
|
||||
// src.
|
||||
//
|
||||
// AX = len(dst) - d
|
||||
// BX = len(src) - s
|
||||
MOVQ R10, AX
|
||||
SUBQ DI, AX
|
||||
MOVQ R13, BX
|
||||
SUBQ SI, BX
|
||||
|
||||
// !!! Try a faster technique for short (16 or fewer bytes) copies.
|
||||
//
|
||||
// if length > 16 || len(dst)-d < 16 || len(src)-s < 16 {
|
||||
// goto callMemmove // Fall back on calling runtime·memmove.
|
||||
// }
|
||||
//
|
||||
// The C++ snappy code calls this TryFastAppend. It also checks len(src)-s
|
||||
// against 21 instead of 16, because it cannot assume that all of its input
|
||||
// is contiguous in memory and so it needs to leave enough source bytes to
|
||||
// read the next tag without refilling buffers, but Go's Decode assumes
|
||||
// contiguousness (the src argument is a []byte).
|
||||
CMPQ CX, $16
|
||||
JGT callMemmove
|
||||
CMPQ AX, $16
|
||||
JLT callMemmove
|
||||
CMPQ BX, $16
|
||||
JLT callMemmove
|
||||
|
||||
// !!! Implement the copy from src to dst as a 16-byte load and store.
|
||||
// (Decode's documentation says that dst and src must not overlap.)
|
||||
//
|
||||
// This always copies 16 bytes, instead of only length bytes, but that's
|
||||
// OK. If the input is a valid Snappy encoding then subsequent iterations
|
||||
// will fix up the overrun. Otherwise, Decode returns a nil []byte (and a
|
||||
// non-nil error), so the overrun will be ignored.
|
||||
//
|
||||
// Note that on amd64, it is legal and cheap to issue unaligned 8-byte or
|
||||
// 16-byte loads and stores. This technique probably wouldn't be as
|
||||
// effective on architectures that are fussier about alignment.
|
||||
MOVOU 0(SI), X0
|
||||
MOVOU X0, 0(DI)
|
||||
|
||||
// d += length
|
||||
// s += length
|
||||
ADDQ CX, DI
|
||||
ADDQ CX, SI
|
||||
JMP loop
|
||||
|
||||
callMemmove:
|
||||
// if length > len(dst)-d || length > len(src)-s { etc }
|
||||
CMPQ CX, AX
|
||||
JGT errCorrupt
|
||||
CMPQ CX, BX
|
||||
JGT errCorrupt
|
||||
|
||||
// copy(dst[d:], src[s:s+length])
|
||||
//
|
||||
// This means calling runtime·memmove(&dst[d], &src[s], length), so we push
|
||||
// DI, SI and CX as arguments. Coincidentally, we also need to spill those
|
||||
// three registers to the stack, to save local variables across the CALL.
|
||||
MOVQ DI, 0(SP)
|
||||
MOVQ SI, 8(SP)
|
||||
MOVQ CX, 16(SP)
|
||||
MOVQ DI, 24(SP)
|
||||
MOVQ SI, 32(SP)
|
||||
MOVQ CX, 40(SP)
|
||||
CALL runtime·memmove(SB)
|
||||
|
||||
// Restore local variables: unspill registers from the stack and
|
||||
// re-calculate R8-R13.
|
||||
MOVQ 24(SP), DI
|
||||
MOVQ 32(SP), SI
|
||||
MOVQ 40(SP), CX
|
||||
MOVQ dst_base+0(FP), R8
|
||||
MOVQ dst_len+8(FP), R9
|
||||
MOVQ R8, R10
|
||||
ADDQ R9, R10
|
||||
MOVQ src_base+24(FP), R11
|
||||
MOVQ src_len+32(FP), R12
|
||||
MOVQ R11, R13
|
||||
ADDQ R12, R13
|
||||
|
||||
// d += length
|
||||
// s += length
|
||||
ADDQ CX, DI
|
||||
ADDQ CX, SI
|
||||
JMP loop
|
||||
|
||||
tagLit60Plus:
|
||||
// !!! This fragment does the
|
||||
//
|
||||
// s += x - 58; if uint(s) > uint(len(src)) { etc }
|
||||
//
|
||||
// checks. In the asm version, we code it once instead of once per switch case.
|
||||
ADDQ CX, SI
|
||||
SUBQ $58, SI
|
||||
MOVQ SI, BX
|
||||
SUBQ R11, BX
|
||||
CMPQ BX, R12
|
||||
JA errCorrupt
|
||||
|
||||
// case x == 60:
|
||||
CMPL CX, $61
|
||||
JEQ tagLit61
|
||||
JA tagLit62Plus
|
||||
|
||||
// x = uint32(src[s-1])
|
||||
MOVBLZX -1(SI), CX
|
||||
JMP doLit
|
||||
|
||||
tagLit61:
|
||||
// case x == 61:
|
||||
// x = uint32(src[s-2]) | uint32(src[s-1])<<8
|
||||
MOVWLZX -2(SI), CX
|
||||
JMP doLit
|
||||
|
||||
tagLit62Plus:
|
||||
CMPL CX, $62
|
||||
JA tagLit63
|
||||
|
||||
// case x == 62:
|
||||
// x = uint32(src[s-3]) | uint32(src[s-2])<<8 | uint32(src[s-1])<<16
|
||||
MOVWLZX -3(SI), CX
|
||||
MOVBLZX -1(SI), BX
|
||||
SHLL $16, BX
|
||||
ORL BX, CX
|
||||
JMP doLit
|
||||
|
||||
tagLit63:
|
||||
// case x == 63:
|
||||
// x = uint32(src[s-4]) | uint32(src[s-3])<<8 | uint32(src[s-2])<<16 | uint32(src[s-1])<<24
|
||||
MOVL -4(SI), CX
|
||||
JMP doLit
|
||||
|
||||
// The code above handles literal tags.
|
||||
// ----------------------------------------
|
||||
// The code below handles copy tags.
|
||||
|
||||
tagCopy4:
|
||||
// case tagCopy4:
|
||||
// s += 5
|
||||
ADDQ $5, SI
|
||||
|
||||
// if uint(s) > uint(len(src)) { etc }
|
||||
MOVQ SI, BX
|
||||
SUBQ R11, BX
|
||||
CMPQ BX, R12
|
||||
JA errCorrupt
|
||||
|
||||
// length = 1 + int(src[s-5])>>2
|
||||
SHRQ $2, CX
|
||||
INCQ CX
|
||||
|
||||
// offset = int(uint32(src[s-4]) | uint32(src[s-3])<<8 | uint32(src[s-2])<<16 | uint32(src[s-1])<<24)
|
||||
MOVLQZX -4(SI), DX
|
||||
JMP doCopy
|
||||
|
||||
tagCopy2:
|
||||
// case tagCopy2:
|
||||
// s += 3
|
||||
ADDQ $3, SI
|
||||
|
||||
// if uint(s) > uint(len(src)) { etc }
|
||||
MOVQ SI, BX
|
||||
SUBQ R11, BX
|
||||
CMPQ BX, R12
|
||||
JA errCorrupt
|
||||
|
||||
// length = 1 + int(src[s-3])>>2
|
||||
SHRQ $2, CX
|
||||
INCQ CX
|
||||
|
||||
// offset = int(uint32(src[s-2]) | uint32(src[s-1])<<8)
|
||||
MOVWQZX -2(SI), DX
|
||||
JMP doCopy
|
||||
|
||||
tagCopy:
|
||||
// We have a copy tag. We assume that:
|
||||
// - BX == src[s] & 0x03
|
||||
// - CX == src[s]
|
||||
CMPQ BX, $2
|
||||
JEQ tagCopy2
|
||||
JA tagCopy4
|
||||
|
||||
// case tagCopy1:
|
||||
// s += 2
|
||||
ADDQ $2, SI
|
||||
|
||||
// if uint(s) > uint(len(src)) { etc }
|
||||
MOVQ SI, BX
|
||||
SUBQ R11, BX
|
||||
CMPQ BX, R12
|
||||
JA errCorrupt
|
||||
|
||||
// offset = int(uint32(src[s-2])&0xe0<<3 | uint32(src[s-1]))
|
||||
MOVQ CX, DX
|
||||
ANDQ $0xe0, DX
|
||||
SHLQ $3, DX
|
||||
MOVBQZX -1(SI), BX
|
||||
ORQ BX, DX
|
||||
|
||||
// length = 4 + int(src[s-2])>>2&0x7
|
||||
SHRQ $2, CX
|
||||
ANDQ $7, CX
|
||||
ADDQ $4, CX
|
||||
|
||||
doCopy:
|
||||
// This is the end of the outer "switch", when we have a copy tag.
|
||||
//
|
||||
// We assume that:
|
||||
// - CX == length && CX > 0
|
||||
// - DX == offset
|
||||
|
||||
// if offset <= 0 { etc }
|
||||
CMPQ DX, $0
|
||||
JLE errCorrupt
|
||||
|
||||
// if d < offset { etc }
|
||||
MOVQ DI, BX
|
||||
SUBQ R8, BX
|
||||
CMPQ BX, DX
|
||||
JLT errCorrupt
|
||||
|
||||
// if length > len(dst)-d { etc }
|
||||
MOVQ R10, BX
|
||||
SUBQ DI, BX
|
||||
CMPQ CX, BX
|
||||
JGT errCorrupt
|
||||
|
||||
// forwardCopy(dst[d:d+length], dst[d-offset:]); d += length
|
||||
//
|
||||
// Set:
|
||||
// - R14 = len(dst)-d
|
||||
// - R15 = &dst[d-offset]
|
||||
MOVQ R10, R14
|
||||
SUBQ DI, R14
|
||||
MOVQ DI, R15
|
||||
SUBQ DX, R15
|
||||
|
||||
// !!! Try a faster technique for short (16 or fewer bytes) forward copies.
|
||||
//
|
||||
// First, try using two 8-byte load/stores, similar to the doLit technique
|
||||
// above. Even if dst[d:d+length] and dst[d-offset:] can overlap, this is
|
||||
// still OK if offset >= 8. Note that this has to be two 8-byte load/stores
|
||||
// and not one 16-byte load/store, and the first store has to be before the
|
||||
// second load, due to the overlap if offset is in the range [8, 16).
|
||||
//
|
||||
// if length > 16 || offset < 8 || len(dst)-d < 16 {
|
||||
// goto slowForwardCopy
|
||||
// }
|
||||
// copy 16 bytes
|
||||
// d += length
|
||||
CMPQ CX, $16
|
||||
JGT slowForwardCopy
|
||||
CMPQ DX, $8
|
||||
JLT slowForwardCopy
|
||||
CMPQ R14, $16
|
||||
JLT slowForwardCopy
|
||||
MOVQ 0(R15), AX
|
||||
MOVQ AX, 0(DI)
|
||||
MOVQ 8(R15), BX
|
||||
MOVQ BX, 8(DI)
|
||||
ADDQ CX, DI
|
||||
JMP loop
|
||||
|
||||
slowForwardCopy:
|
||||
// !!! If the forward copy is longer than 16 bytes, or if offset < 8, we
|
||||
// can still try 8-byte load stores, provided we can overrun up to 10 extra
|
||||
// bytes. As above, the overrun will be fixed up by subsequent iterations
|
||||
// of the outermost loop.
|
||||
//
|
||||
// The C++ snappy code calls this technique IncrementalCopyFastPath. Its
|
||||
// commentary says:
|
||||
//
|
||||
// ----
|
||||
//
|
||||
// The main part of this loop is a simple copy of eight bytes at a time
|
||||
// until we've copied (at least) the requested amount of bytes. However,
|
||||
// if d and d-offset are less than eight bytes apart (indicating a
|
||||
// repeating pattern of length < 8), we first need to expand the pattern in
|
||||
// order to get the correct results. For instance, if the buffer looks like
|
||||
// this, with the eight-byte <d-offset> and <d> patterns marked as
|
||||
// intervals:
|
||||
//
|
||||
// abxxxxxxxxxxxx
|
||||
// [------] d-offset
|
||||
// [------] d
|
||||
//
|
||||
// a single eight-byte copy from <d-offset> to <d> will repeat the pattern
|
||||
// once, after which we can move <d> two bytes without moving <d-offset>:
|
||||
//
|
||||
// ababxxxxxxxxxx
|
||||
// [------] d-offset
|
||||
// [------] d
|
||||
//
|
||||
// and repeat the exercise until the two no longer overlap.
|
||||
//
|
||||
// This allows us to do very well in the special case of one single byte
|
||||
// repeated many times, without taking a big hit for more general cases.
|
||||
//
|
||||
// The worst case of extra writing past the end of the match occurs when
|
||||
// offset == 1 and length == 1; the last copy will read from byte positions
|
||||
// [0..7] and write to [4..11], whereas it was only supposed to write to
|
||||
// position 1. Thus, ten excess bytes.
|
||||
//
|
||||
// ----
|
||||
//
|
||||
// That "10 byte overrun" worst case is confirmed by Go's
|
||||
// TestSlowForwardCopyOverrun, which also tests the fixUpSlowForwardCopy
|
||||
// and finishSlowForwardCopy algorithm.
|
||||
//
|
||||
// if length > len(dst)-d-10 {
|
||||
// goto verySlowForwardCopy
|
||||
// }
|
||||
SUBQ $10, R14
|
||||
CMPQ CX, R14
|
||||
JGT verySlowForwardCopy
|
||||
|
||||
makeOffsetAtLeast8:
|
||||
// !!! As above, expand the pattern so that offset >= 8 and we can use
|
||||
// 8-byte load/stores.
|
||||
//
|
||||
// for offset < 8 {
|
||||
// copy 8 bytes from dst[d-offset:] to dst[d:]
|
||||
// length -= offset
|
||||
// d += offset
|
||||
// offset += offset
|
||||
// // The two previous lines together means that d-offset, and therefore
|
||||
// // R15, is unchanged.
|
||||
// }
|
||||
CMPQ DX, $8
|
||||
JGE fixUpSlowForwardCopy
|
||||
MOVQ (R15), BX
|
||||
MOVQ BX, (DI)
|
||||
SUBQ DX, CX
|
||||
ADDQ DX, DI
|
||||
ADDQ DX, DX
|
||||
JMP makeOffsetAtLeast8
|
||||
|
||||
fixUpSlowForwardCopy:
|
||||
// !!! Add length (which might be negative now) to d (implied by DI being
|
||||
// &dst[d]) so that d ends up at the right place when we jump back to the
|
||||
// top of the loop. Before we do that, though, we save DI to AX so that, if
|
||||
// length is positive, copying the remaining length bytes will write to the
|
||||
// right place.
|
||||
MOVQ DI, AX
|
||||
ADDQ CX, DI
|
||||
|
||||
finishSlowForwardCopy:
|
||||
// !!! Repeat 8-byte load/stores until length <= 0. Ending with a negative
|
||||
// length means that we overrun, but as above, that will be fixed up by
|
||||
// subsequent iterations of the outermost loop.
|
||||
CMPQ CX, $0
|
||||
JLE loop
|
||||
MOVQ (R15), BX
|
||||
MOVQ BX, (AX)
|
||||
ADDQ $8, R15
|
||||
ADDQ $8, AX
|
||||
SUBQ $8, CX
|
||||
JMP finishSlowForwardCopy
|
||||
|
||||
verySlowForwardCopy:
|
||||
// verySlowForwardCopy is a simple implementation of forward copy. In C
|
||||
// parlance, this is a do/while loop instead of a while loop, since we know
|
||||
// that length > 0. In Go syntax:
|
||||
//
|
||||
// for {
|
||||
// dst[d] = dst[d - offset]
|
||||
// d++
|
||||
// length--
|
||||
// if length == 0 {
|
||||
// break
|
||||
// }
|
||||
// }
|
||||
MOVB (R15), BX
|
||||
MOVB BX, (DI)
|
||||
INCQ R15
|
||||
INCQ DI
|
||||
DECQ CX
|
||||
JNZ verySlowForwardCopy
|
||||
JMP loop
|
||||
|
||||
// The code above handles copy tags.
|
||||
// ----------------------------------------
|
||||
|
||||
end:
|
||||
// This is the end of the "for s < len(src)".
|
||||
//
|
||||
// if d != len(dst) { etc }
|
||||
CMPQ DI, R10
|
||||
JNE errCorrupt
|
||||
|
||||
// return 0
|
||||
MOVQ $0, ret+48(FP)
|
||||
RET
|
||||
|
||||
errCorrupt:
|
||||
// return decodeErrCodeCorrupt
|
||||
MOVQ $1, ret+48(FP)
|
||||
RET
|
||||
101 vendor/github.com/golang/snappy/decode_other.go generated vendored Normal file
@@ -0,0 +1,101 @@
|
||||
// Copyright 2016 The Snappy-Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
// +build !amd64 appengine !gc noasm
|
||||
|
||||
package snappy
|
||||
|
||||
// decode writes the decoding of src to dst. It assumes that the varint-encoded
|
||||
// length of the decompressed bytes has already been read, and that len(dst)
|
||||
// equals that length.
|
||||
//
|
||||
// It returns 0 on success or a decodeErrCodeXxx error code on failure.
|
||||
func decode(dst, src []byte) int {
|
||||
var d, s, offset, length int
|
||||
for s < len(src) {
|
||||
switch src[s] & 0x03 {
|
||||
case tagLiteral:
|
||||
x := uint32(src[s] >> 2)
|
||||
switch {
|
||||
case x < 60:
|
||||
s++
|
||||
case x == 60:
|
||||
s += 2
|
||||
if uint(s) > uint(len(src)) { // The uint conversions catch overflow from the previous line.
|
||||
return decodeErrCodeCorrupt
|
||||
}
|
||||
x = uint32(src[s-1])
|
||||
case x == 61:
|
||||
s += 3
|
||||
if uint(s) > uint(len(src)) { // The uint conversions catch overflow from the previous line.
|
||||
return decodeErrCodeCorrupt
|
||||
}
|
||||
x = uint32(src[s-2]) | uint32(src[s-1])<<8
|
||||
case x == 62:
|
||||
s += 4
|
||||
if uint(s) > uint(len(src)) { // The uint conversions catch overflow from the previous line.
|
||||
return decodeErrCodeCorrupt
|
||||
}
|
||||
x = uint32(src[s-3]) | uint32(src[s-2])<<8 | uint32(src[s-1])<<16
|
||||
case x == 63:
|
||||
s += 5
|
||||
if uint(s) > uint(len(src)) { // The uint conversions catch overflow from the previous line.
|
||||
return decodeErrCodeCorrupt
|
||||
}
|
||||
x = uint32(src[s-4]) | uint32(src[s-3])<<8 | uint32(src[s-2])<<16 | uint32(src[s-1])<<24
|
||||
}
|
||||
length = int(x) + 1
|
||||
if length <= 0 {
|
||||
return decodeErrCodeUnsupportedLiteralLength
|
||||
}
|
||||
if length > len(dst)-d || length > len(src)-s {
|
||||
return decodeErrCodeCorrupt
|
||||
}
|
||||
copy(dst[d:], src[s:s+length])
|
||||
d += length
|
||||
s += length
|
||||
continue
|
||||
|
||||
case tagCopy1:
|
||||
s += 2
|
||||
if uint(s) > uint(len(src)) { // The uint conversions catch overflow from the previous line.
|
||||
return decodeErrCodeCorrupt
|
||||
}
|
||||
length = 4 + int(src[s-2])>>2&0x7
|
||||
offset = int(uint32(src[s-2])&0xe0<<3 | uint32(src[s-1]))
|
||||
|
||||
case tagCopy2:
|
||||
s += 3
|
||||
if uint(s) > uint(len(src)) { // The uint conversions catch overflow from the previous line.
|
||||
return decodeErrCodeCorrupt
|
||||
}
|
||||
length = 1 + int(src[s-3])>>2
|
||||
offset = int(uint32(src[s-2]) | uint32(src[s-1])<<8)
|
||||
|
||||
case tagCopy4:
|
||||
s += 5
|
||||
if uint(s) > uint(len(src)) { // The uint conversions catch overflow from the previous line.
|
||||
return decodeErrCodeCorrupt
|
||||
}
|
||||
length = 1 + int(src[s-5])>>2
|
||||
offset = int(uint32(src[s-4]) | uint32(src[s-3])<<8 | uint32(src[s-2])<<16 | uint32(src[s-1])<<24)
|
||||
}
|
||||
|
||||
if offset <= 0 || d < offset || length > len(dst)-d {
|
||||
return decodeErrCodeCorrupt
|
||||
}
|
||||
// Copy from an earlier sub-slice of dst to a later sub-slice. Unlike
|
||||
// the built-in copy function, this byte-by-byte copy always runs
|
||||
// forwards, even if the slices overlap. Conceptually, this is:
|
||||
//
|
||||
// d += forwardCopy(dst[d:d+length], dst[d-offset:])
|
||||
for end := d + length; d != end; d++ {
|
||||
dst[d] = dst[d-offset]
|
||||
}
|
||||
}
|
||||
if d != len(dst) {
|
||||
return decodeErrCodeCorrupt
|
||||
}
|
||||
return 0
|
||||
}
|
||||
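To make the tag handling above concrete, here is a block assembled by hand from the literal-tag case (x < 60); the byte values are worked out from the switch above rather than taken from the package's own tests:

```go
package main

import (
	"fmt"

	"github.com/golang/snappy"
)

func main() {
	// Layout, following decode's handling above:
	//   0x03          varint preamble: decoded length is 3
	//   0x08          literal tag: tagLiteral (0x00) | (length-1)<<2, length 3
	//   'a' 'b' 'c'   the literal bytes themselves
	block := []byte{0x03, 0x08, 'a', 'b', 'c'}

	out, err := snappy.Decode(nil, block)
	fmt.Printf("%q %v\n", out, err) // "abc" <nil>
}
```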
285 vendor/github.com/golang/snappy/encode.go generated vendored Normal file
@@ -0,0 +1,285 @@
|
||||
// Copyright 2011 The Snappy-Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package snappy
|
||||
|
||||
import (
|
||||
"encoding/binary"
|
||||
"errors"
|
||||
"io"
|
||||
)
|
||||
|
||||
// Encode returns the encoded form of src. The returned slice may be a sub-
|
||||
// slice of dst if dst was large enough to hold the entire encoded block.
|
||||
// Otherwise, a newly allocated slice will be returned.
|
||||
//
|
||||
// The dst and src must not overlap. It is valid to pass a nil dst.
|
||||
func Encode(dst, src []byte) []byte {
|
||||
if n := MaxEncodedLen(len(src)); n < 0 {
|
||||
panic(ErrTooLarge)
|
||||
} else if len(dst) < n {
|
||||
dst = make([]byte, n)
|
||||
}
|
||||
|
||||
// The block starts with the varint-encoded length of the decompressed bytes.
|
||||
d := binary.PutUvarint(dst, uint64(len(src)))
|
||||
|
||||
for len(src) > 0 {
|
||||
p := src
|
||||
src = nil
|
||||
if len(p) > maxBlockSize {
|
||||
p, src = p[:maxBlockSize], p[maxBlockSize:]
|
||||
}
|
||||
if len(p) < minNonLiteralBlockSize {
|
||||
d += emitLiteral(dst[d:], p)
|
||||
} else {
|
||||
d += encodeBlock(dst[d:], p)
|
||||
}
|
||||
}
|
||||
return dst[:d]
|
||||
}
|
||||
|
||||
// inputMargin is the minimum number of extra input bytes to keep, inside
|
||||
// encodeBlock's inner loop. On some architectures, this margin lets us
|
||||
// implement a fast path for emitLiteral, where the copy of short (<= 16 byte)
|
||||
// literals can be implemented as a single load to and store from a 16-byte
|
||||
// register. That literal's actual length can be as short as 1 byte, so this
|
||||
// can copy up to 15 bytes too much, but that's OK as subsequent iterations of
|
||||
// the encoding loop will fix up the copy overrun, and this inputMargin ensures
|
||||
// that we don't overrun the dst and src buffers.
|
||||
const inputMargin = 16 - 1
|
||||
|
||||
// minNonLiteralBlockSize is the minimum size of the input to encodeBlock that
|
||||
// could be encoded with a copy tag. This is the minimum with respect to the
|
||||
// algorithm used by encodeBlock, not a minimum enforced by the file format.
|
||||
//
|
||||
// The encoded output must start with at least a 1 byte literal, as there are
|
||||
// no previous bytes to copy. A minimal (1 byte) copy after that, generated
|
||||
// from an emitCopy call in encodeBlock's main loop, would require at least
|
||||
// another inputMargin bytes, for the reason above: we want any emitLiteral
|
||||
// calls inside encodeBlock's main loop to use the fast path if possible, which
|
||||
// requires being able to overrun by inputMargin bytes. Thus,
|
||||
// minNonLiteralBlockSize equals 1 + 1 + inputMargin.
|
||||
//
|
||||
// The C++ code doesn't use this exact threshold, but it could, as discussed at
|
||||
// https://groups.google.com/d/topic/snappy-compression/oGbhsdIJSJ8/discussion
|
||||
// The difference between Go (2+inputMargin) and C++ (inputMargin) is purely an
|
||||
// optimization. It should not affect the encoded form. This is tested by
|
||||
// TestSameEncodingAsCppShortCopies.
|
||||
const minNonLiteralBlockSize = 1 + 1 + inputMargin
|
||||
|
||||
// MaxEncodedLen returns the maximum length of a snappy block, given its
|
||||
// uncompressed length.
|
||||
//
|
||||
// It will return a negative value if srcLen is too large to encode.
|
||||
func MaxEncodedLen(srcLen int) int {
|
||||
n := uint64(srcLen)
|
||||
if n > 0xffffffff {
|
||||
return -1
|
||||
}
|
||||
// Compressed data can be defined as:
|
||||
// compressed := item* literal*
|
||||
// item := literal* copy
|
||||
//
|
||||
// The trailing literal sequence has a space blowup of at most 62/60
|
||||
// since a literal of length 60 needs one tag byte + one extra byte
|
||||
// for length information.
|
||||
//
|
||||
// Item blowup is trickier to measure. Suppose the "copy" op copies
|
||||
// 4 bytes of data. Because of a special check in the encoding code,
|
||||
// we produce a 4-byte copy only if the offset is < 65536. Therefore
|
||||
// the copy op takes 3 bytes to encode, and this type of item leads
|
||||
// to at most the 62/60 blowup for representing literals.
|
||||
//
|
||||
// Suppose the "copy" op copies 5 bytes of data. If the offset is big
|
||||
// enough, it will take 5 bytes to encode the copy op. Therefore the
|
||||
// worst case here is a one-byte literal followed by a five-byte copy.
|
||||
// That is, 6 bytes of input turn into 7 bytes of "compressed" data.
|
||||
//
|
||||
// This last factor dominates the blowup, so the final estimate is:
|
||||
n = 32 + n + n/6
|
||||
if n > 0xffffffff {
|
||||
return -1
|
||||
}
|
||||
return int(n)
|
||||
}
|
||||
|
||||
var errClosed = errors.New("snappy: Writer is closed")
|
||||
|
||||
// NewWriter returns a new Writer that compresses to w.
|
||||
//
|
||||
// The Writer returned does not buffer writes. There is no need to Flush or
|
||||
// Close such a Writer.
|
||||
//
|
||||
// Deprecated: the Writer returned is not suitable for many small writes, only
|
||||
// for few large writes. Use NewBufferedWriter instead, which is efficient
|
||||
// regardless of the frequency and shape of the writes, and remember to Close
|
||||
// that Writer when done.
|
||||
func NewWriter(w io.Writer) *Writer {
|
||||
return &Writer{
|
||||
w: w,
|
||||
obuf: make([]byte, obufLen),
|
||||
}
|
||||
}
|
||||
|
||||
// NewBufferedWriter returns a new Writer that compresses to w, using the
|
||||
// framing format described at
|
||||
// https://github.com/google/snappy/blob/master/framing_format.txt
|
||||
//
|
||||
// The Writer returned buffers writes. Users must call Close to guarantee all
|
||||
// data has been forwarded to the underlying io.Writer. They may also call
|
||||
// Flush zero or more times before calling Close.
|
||||
func NewBufferedWriter(w io.Writer) *Writer {
|
||||
return &Writer{
|
||||
w: w,
|
||||
ibuf: make([]byte, 0, maxBlockSize),
|
||||
obuf: make([]byte, obufLen),
|
||||
}
|
||||
}
|
||||
|
||||
// Writer is an io.Writer that can write Snappy-compressed bytes.
|
||||
type Writer struct {
|
||||
w io.Writer
|
||||
err error
|
||||
|
||||
// ibuf is a buffer for the incoming (uncompressed) bytes.
|
||||
//
|
||||
// Its use is optional. For backwards compatibility, Writers created by the
|
||||
// NewWriter function have ibuf == nil, do not buffer incoming bytes, and
|
||||
// therefore do not need to be Flush'ed or Close'd.
|
||||
ibuf []byte
|
||||
|
||||
// obuf is a buffer for the outgoing (compressed) bytes.
|
||||
obuf []byte
|
||||
|
||||
// wroteStreamHeader is whether we have written the stream header.
|
||||
wroteStreamHeader bool
|
||||
}
|
||||
|
||||
// Reset discards the writer's state and switches the Snappy writer to write to
|
||||
// w. This permits reusing a Writer rather than allocating a new one.
|
||||
func (w *Writer) Reset(writer io.Writer) {
|
||||
w.w = writer
|
||||
w.err = nil
|
||||
if w.ibuf != nil {
|
||||
w.ibuf = w.ibuf[:0]
|
||||
}
|
||||
w.wroteStreamHeader = false
|
||||
}
|
||||
|
||||
// Write satisfies the io.Writer interface.
|
||||
func (w *Writer) Write(p []byte) (nRet int, errRet error) {
|
||||
if w.ibuf == nil {
|
||||
// Do not buffer incoming bytes. This does not perform or compress well
|
||||
// if the caller of Writer.Write writes many small slices. This
|
||||
// behavior is therefore deprecated, but still supported for backwards
|
||||
// compatibility with code that doesn't explicitly Flush or Close.
|
||||
return w.write(p)
|
||||
}
|
||||
|
||||
// The remainder of this method is based on bufio.Writer.Write from the
|
||||
// standard library.
|
||||
|
||||
for len(p) > (cap(w.ibuf)-len(w.ibuf)) && w.err == nil {
|
||||
var n int
|
||||
if len(w.ibuf) == 0 {
|
||||
// Large write, empty buffer.
|
||||
// Write directly from p to avoid copy.
|
||||
n, _ = w.write(p)
|
||||
} else {
|
||||
n = copy(w.ibuf[len(w.ibuf):cap(w.ibuf)], p)
|
||||
w.ibuf = w.ibuf[:len(w.ibuf)+n]
|
||||
w.Flush()
|
||||
}
|
||||
nRet += n
|
||||
p = p[n:]
|
||||
}
|
||||
if w.err != nil {
|
||||
return nRet, w.err
|
||||
}
|
||||
n := copy(w.ibuf[len(w.ibuf):cap(w.ibuf)], p)
|
||||
w.ibuf = w.ibuf[:len(w.ibuf)+n]
|
||||
nRet += n
|
||||
return nRet, nil
|
||||
}
|
||||
|
||||
func (w *Writer) write(p []byte) (nRet int, errRet error) {
|
||||
if w.err != nil {
|
||||
return 0, w.err
|
||||
}
|
||||
for len(p) > 0 {
|
||||
obufStart := len(magicChunk)
|
||||
if !w.wroteStreamHeader {
|
||||
w.wroteStreamHeader = true
|
||||
copy(w.obuf, magicChunk)
|
||||
obufStart = 0
|
||||
}
|
||||
|
||||
var uncompressed []byte
|
||||
if len(p) > maxBlockSize {
|
||||
uncompressed, p = p[:maxBlockSize], p[maxBlockSize:]
|
||||
} else {
|
||||
uncompressed, p = p, nil
|
||||
}
|
||||
checksum := crc(uncompressed)
|
||||
|
||||
// Compress the buffer, discarding the result if the improvement
|
||||
// isn't at least 12.5%.
|
||||
compressed := Encode(w.obuf[obufHeaderLen:], uncompressed)
|
||||
chunkType := uint8(chunkTypeCompressedData)
|
||||
chunkLen := 4 + len(compressed)
|
||||
obufEnd := obufHeaderLen + len(compressed)
|
||||
if len(compressed) >= len(uncompressed)-len(uncompressed)/8 {
|
||||
chunkType = chunkTypeUncompressedData
|
||||
chunkLen = 4 + len(uncompressed)
|
||||
obufEnd = obufHeaderLen
|
||||
}
|
||||
|
||||
// Fill in the per-chunk header that comes before the body.
|
||||
w.obuf[len(magicChunk)+0] = chunkType
|
||||
w.obuf[len(magicChunk)+1] = uint8(chunkLen >> 0)
|
||||
w.obuf[len(magicChunk)+2] = uint8(chunkLen >> 8)
|
||||
w.obuf[len(magicChunk)+3] = uint8(chunkLen >> 16)
|
||||
w.obuf[len(magicChunk)+4] = uint8(checksum >> 0)
|
||||
w.obuf[len(magicChunk)+5] = uint8(checksum >> 8)
|
||||
w.obuf[len(magicChunk)+6] = uint8(checksum >> 16)
|
||||
w.obuf[len(magicChunk)+7] = uint8(checksum >> 24)
|
||||
|
||||
if _, err := w.w.Write(w.obuf[obufStart:obufEnd]); err != nil {
|
||||
w.err = err
|
||||
return nRet, err
|
||||
}
|
||||
if chunkType == chunkTypeUncompressedData {
|
||||
if _, err := w.w.Write(uncompressed); err != nil {
|
||||
w.err = err
|
||||
return nRet, err
|
||||
}
|
||||
}
|
||||
nRet += len(uncompressed)
|
||||
}
|
||||
return nRet, nil
|
||||
}
|
||||
|
||||
// Flush flushes the Writer to its underlying io.Writer.
|
||||
func (w *Writer) Flush() error {
|
||||
if w.err != nil {
|
||||
return w.err
|
||||
}
|
||||
if len(w.ibuf) == 0 {
|
||||
return nil
|
||||
}
|
||||
w.write(w.ibuf)
|
||||
w.ibuf = w.ibuf[:0]
|
||||
return w.err
|
||||
}
|
||||
|
||||
// Close calls Flush and then closes the Writer.
|
||||
func (w *Writer) Close() error {
|
||||
w.Flush()
|
||||
ret := w.err
|
||||
if w.err == nil {
|
||||
w.err = errClosed
|
||||
}
|
||||
return ret
|
||||
}
|
||||
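A short usage sketch tying the pieces of encode.go together: MaxEncodedLen's 32 + n + n/6 worst-case estimate, and the buffered Writer whose Close emits the final chunk. Import path assumed as before:

```go
package main

import (
	"bytes"
	"fmt"

	"github.com/golang/snappy"
)

func main() {
	// Worst-case sizing from the 32 + n + n/6 estimate:
	// for a 64 KiB block, 32 + 65536 + 10922 = 76490 bytes.
	fmt.Println(snappy.MaxEncodedLen(65536)) // 76490

	// NewBufferedWriter is the recommended constructor; Close (or Flush)
	// is what actually writes out the last buffered chunk.
	var buf bytes.Buffer
	w := snappy.NewBufferedWriter(&buf)
	for i := 0; i < 1000; i++ {
		w.Write([]byte("many small writes "))
	}
	if err := w.Close(); err != nil {
		panic(err)
	}
	fmt.Println(buf.Len() > 0) // true
}
```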
29 vendor/github.com/golang/snappy/encode_amd64.go generated vendored Normal file
@@ -0,0 +1,29 @@
|
||||
// Copyright 2016 The Snappy-Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
// +build !appengine
|
||||
// +build gc
|
||||
// +build !noasm
|
||||
|
||||
package snappy
|
||||
|
||||
// emitLiteral has the same semantics as in encode_other.go.
|
||||
//
|
||||
//go:noescape
|
||||
func emitLiteral(dst, lit []byte) int
|
||||
|
||||
// emitCopy has the same semantics as in encode_other.go.
|
||||
//
|
||||
//go:noescape
|
||||
func emitCopy(dst []byte, offset, length int) int
|
||||
|
||||
// extendMatch has the same semantics as in encode_other.go.
|
||||
//
|
||||
//go:noescape
|
||||
func extendMatch(src []byte, i, j int) int
|
||||
|
||||
// encodeBlock has the same semantics as in encode_other.go.
|
||||
//
|
||||
//go:noescape
|
||||
func encodeBlock(dst, src []byte) (d int)
|
||||
730 vendor/github.com/golang/snappy/encode_amd64.s generated vendored Normal file
@@ -0,0 +1,730 @@
|
||||
// Copyright 2016 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
// +build !appengine
|
||||
// +build gc
|
||||
// +build !noasm
|
||||
|
||||
#include "textflag.h"
|
||||
|
||||
// The XXX lines assemble on Go 1.4, 1.5 and 1.7, but not 1.6, due to a
|
||||
// Go toolchain regression. See https://github.com/golang/go/issues/15426 and
|
||||
// https://github.com/golang/snappy/issues/29
|
||||
//
|
||||
// As a workaround, the package was built with a known good assembler, and
|
||||
// those instructions were disassembled by "objdump -d" to yield the
|
||||
// 4e 0f b7 7c 5c 78 movzwq 0x78(%rsp,%r11,2),%r15
|
||||
// style comments, in AT&T asm syntax. Note that rsp here is a physical
|
||||
// register, not Go/asm's SP pseudo-register (see https://golang.org/doc/asm).
|
||||
// The instructions were then encoded as "BYTE $0x.." sequences, which assemble
|
||||
// fine on Go 1.6.
|
||||
|
||||
// The asm code generally follows the pure Go code in encode_other.go, except
|
||||
// where marked with a "!!!".
|
||||
|
||||
// ----------------------------------------------------------------------------
|
||||
|
||||
// func emitLiteral(dst, lit []byte) int
|
||||
//
|
||||
// All local variables fit into registers. The register allocation:
|
||||
// - AX len(lit)
|
||||
// - BX n
|
||||
// - DX return value
|
||||
// - DI &dst[i]
|
||||
// - R10 &lit[0]
|
||||
//
|
||||
// The 24 bytes of stack space is to call runtime·memmove.
|
||||
//
|
||||
// The unusual register allocation of local variables, such as R10 for the
|
||||
// source pointer, matches the allocation used at the call site in encodeBlock,
|
||||
// which makes it easier to manually inline this function.
|
||||
TEXT ·emitLiteral(SB), NOSPLIT, $24-56
|
||||
MOVQ dst_base+0(FP), DI
|
||||
MOVQ lit_base+24(FP), R10
|
||||
MOVQ lit_len+32(FP), AX
|
||||
MOVQ AX, DX
|
||||
MOVL AX, BX
|
||||
SUBL $1, BX
|
||||
|
||||
CMPL BX, $60
|
||||
JLT oneByte
|
||||
CMPL BX, $256
|
||||
JLT twoBytes
|
||||
|
||||
threeBytes:
|
||||
MOVB $0xf4, 0(DI)
|
||||
MOVW BX, 1(DI)
|
||||
ADDQ $3, DI
|
||||
ADDQ $3, DX
|
||||
JMP memmove
|
||||
|
||||
twoBytes:
|
||||
MOVB $0xf0, 0(DI)
|
||||
MOVB BX, 1(DI)
|
||||
ADDQ $2, DI
|
||||
ADDQ $2, DX
|
||||
JMP memmove
|
||||
|
||||
oneByte:
|
||||
SHLB $2, BX
|
||||
MOVB BX, 0(DI)
|
||||
ADDQ $1, DI
|
||||
ADDQ $1, DX
|
||||
|
||||
memmove:
|
||||
MOVQ DX, ret+48(FP)
|
||||
|
||||
// copy(dst[i:], lit)
|
||||
//
|
||||
// This means calling runtime·memmove(&dst[i], &lit[0], len(lit)), so we push
|
||||
// DI, R10 and AX as arguments.
|
||||
MOVQ DI, 0(SP)
|
||||
MOVQ R10, 8(SP)
|
||||
MOVQ AX, 16(SP)
|
||||
CALL runtime·memmove(SB)
|
||||
RET
|
||||
|
||||
// ----------------------------------------------------------------------------
|
||||
|
||||
// func emitCopy(dst []byte, offset, length int) int
|
||||
//
|
||||
// All local variables fit into registers. The register allocation:
|
||||
// - AX length
|
||||
// - SI &dst[0]
|
||||
// - DI &dst[i]
|
||||
// - R11 offset
|
||||
//
|
||||
// The unusual register allocation of local variables, such as R11 for the
|
||||
// offset, matches the allocation used at the call site in encodeBlock, which
|
||||
// makes it easier to manually inline this function.
|
||||
TEXT ·emitCopy(SB), NOSPLIT, $0-48
|
||||
MOVQ dst_base+0(FP), DI
|
||||
MOVQ DI, SI
|
||||
MOVQ offset+24(FP), R11
|
||||
MOVQ length+32(FP), AX
|
||||
|
||||
loop0:
|
||||
// for length >= 68 { etc }
|
||||
CMPL AX, $68
|
||||
JLT step1
|
||||
|
||||
// Emit a length 64 copy, encoded as 3 bytes.
|
||||
MOVB $0xfe, 0(DI)
|
||||
MOVW R11, 1(DI)
|
||||
ADDQ $3, DI
|
||||
SUBL $64, AX
|
||||
JMP loop0
|
||||
|
||||
step1:
|
||||
// if length > 64 { etc }
|
||||
CMPL AX, $64
|
||||
JLE step2
|
||||
|
||||
// Emit a length 60 copy, encoded as 3 bytes.
|
||||
MOVB $0xee, 0(DI)
|
||||
MOVW R11, 1(DI)
|
||||
ADDQ $3, DI
|
||||
SUBL $60, AX
|
||||
|
||||
step2:
|
||||
// if length >= 12 || offset >= 2048 { goto step3 }
|
||||
CMPL AX, $12
|
||||
JGE step3
|
||||
CMPL R11, $2048
|
||||
JGE step3
|
||||
|
||||
// Emit the remaining copy, encoded as 2 bytes.
|
||||
MOVB R11, 1(DI)
|
||||
SHRL $8, R11
|
||||
SHLB $5, R11
|
||||
SUBB $4, AX
|
||||
SHLB $2, AX
|
||||
ORB AX, R11
|
||||
ORB $1, R11
|
||||
MOVB R11, 0(DI)
|
||||
ADDQ $2, DI
|
||||
|
||||
// Return the number of bytes written.
|
||||
SUBQ SI, DI
|
||||
MOVQ DI, ret+40(FP)
|
||||
RET
|
||||
|
||||
step3:
|
||||
// Emit the remaining copy, encoded as 3 bytes.
|
||||
SUBL $1, AX
|
||||
SHLB $2, AX
|
||||
ORB $2, AX
|
||||
MOVB AX, 0(DI)
|
||||
MOVW R11, 1(DI)
|
||||
ADDQ $3, DI
|
||||
|
||||
// Return the number of bytes written.
|
||||
SUBQ SI, DI
|
||||
MOVQ DI, ret+40(FP)
|
||||
RET
|
||||
|
||||
// ----------------------------------------------------------------------------
|
||||
|
||||
// func extendMatch(src []byte, i, j int) int
|
||||
//
|
||||
// All local variables fit into registers. The register allocation:
|
||||
// - DX &src[0]
|
||||
// - SI &src[j]
|
||||
// - R13 &src[len(src) - 8]
|
||||
// - R14 &src[len(src)]
|
||||
// - R15 &src[i]
|
||||
//
|
||||
// The unusual register allocation of local variables, such as R15 for a source
|
||||
// pointer, matches the allocation used at the call site in encodeBlock, which
|
||||
// makes it easier to manually inline this function.
|
||||
TEXT ·extendMatch(SB), NOSPLIT, $0-48
|
||||
MOVQ src_base+0(FP), DX
|
||||
MOVQ src_len+8(FP), R14
|
||||
MOVQ i+24(FP), R15
|
||||
MOVQ j+32(FP), SI
|
||||
ADDQ DX, R14
|
||||
ADDQ DX, R15
|
||||
ADDQ DX, SI
|
||||
MOVQ R14, R13
|
||||
SUBQ $8, R13
|
||||
|
||||
cmp8:
|
||||
// As long as we are 8 or more bytes before the end of src, we can load and
|
||||
// compare 8 bytes at a time. If those 8 bytes are equal, repeat.
|
||||
CMPQ SI, R13
|
||||
JA cmp1
|
||||
MOVQ (R15), AX
|
||||
MOVQ (SI), BX
|
||||
CMPQ AX, BX
|
||||
JNE bsf
|
||||
ADDQ $8, R15
|
||||
ADDQ $8, SI
|
||||
JMP cmp8
|
||||
|
||||
bsf:
|
||||
// If those 8 bytes were not equal, XOR the two 8 byte values, and return
|
||||
// the index of the first byte that differs. The BSF instruction finds the
|
||||
// least significant 1 bit, the amd64 architecture is little-endian, and
|
||||
// the shift by 3 converts a bit index to a byte index.
|
||||
XORQ AX, BX
|
||||
BSFQ BX, BX
|
||||
SHRQ $3, BX
|
||||
ADDQ BX, SI
|
||||
|
||||
// Convert from &src[ret] to ret.
|
||||
SUBQ DX, SI
|
||||
MOVQ SI, ret+40(FP)
|
||||
RET
|
||||
|
||||
cmp1:
|
||||
// In src's tail, compare 1 byte at a time.
|
||||
CMPQ SI, R14
|
||||
JAE extendMatchEnd
|
||||
MOVB (R15), AX
|
||||
MOVB (SI), BX
|
||||
CMPB AX, BX
|
||||
JNE extendMatchEnd
|
||||
ADDQ $1, R15
|
||||
ADDQ $1, SI
|
||||
JMP cmp1
|
||||
|
||||
extendMatchEnd:
|
||||
// Convert from &src[ret] to ret.
|
||||
SUBQ DX, SI
|
||||
MOVQ SI, ret+40(FP)
|
||||
RET
|
||||
|
||||
// ----------------------------------------------------------------------------
|
||||
|
||||
// func encodeBlock(dst, src []byte) (d int)
|
||||
//
|
||||
// All local variables fit into registers, other than "var table". The register
|
||||
// allocation:
|
||||
// - AX . .
|
||||
// - BX . .
|
||||
// - CX 56 shift (note that amd64 shifts by non-immediates must use CX).
|
||||
// - DX 64 &src[0], tableSize
|
||||
// - SI 72 &src[s]
|
||||
// - DI 80 &dst[d]
|
||||
// - R9 88 sLimit
|
||||
// - R10 . &src[nextEmit]
|
||||
// - R11 96 prevHash, currHash, nextHash, offset
|
||||
// - R12 104 &src[base], skip
|
||||
// - R13 . &src[nextS], &src[len(src) - 8]
|
||||
// - R14 . len(src), bytesBetweenHashLookups, &src[len(src)], x
|
||||
// - R15 112 candidate
|
||||
//
|
||||
// The second column (56, 64, etc) is the stack offset to spill the registers
|
||||
// when calling other functions. We could pack this slightly tighter, but it's
|
||||
// simpler to have a dedicated spill map independent of the function called.
|
||||
//
|
||||
// "var table [maxTableSize]uint16" takes up 32768 bytes of stack space. An
|
||||
// extra 56 bytes, to call other functions, and an extra 64 bytes, to spill
|
||||
// local variables (registers) during calls gives 32768 + 56 + 64 = 32888.
|
||||
TEXT ·encodeBlock(SB), 0, $32888-56
|
||||
MOVQ dst_base+0(FP), DI
|
||||
MOVQ src_base+24(FP), SI
|
||||
MOVQ src_len+32(FP), R14
|
||||
|
||||
// shift, tableSize := uint32(32-8), 1<<8
|
||||
MOVQ $24, CX
|
||||
MOVQ $256, DX
|
||||
|
||||
calcShift:
|
||||
// for ; tableSize < maxTableSize && tableSize < len(src); tableSize *= 2 {
|
||||
// shift--
|
||||
// }
|
||||
CMPQ DX, $16384
|
||||
JGE varTable
|
||||
CMPQ DX, R14
|
||||
JGE varTable
|
||||
SUBQ $1, CX
|
||||
SHLQ $1, DX
|
||||
JMP calcShift
|
||||
|
||||
varTable:
|
||||
// var table [maxTableSize]uint16
|
||||
//
|
||||
// In the asm code, unlike the Go code, we can zero-initialize only the
|
||||
// first tableSize elements. Each uint16 element is 2 bytes and each MOVOU
|
||||
// writes 16 bytes, so we can do only tableSize/8 writes instead of the
|
||||
// 2048 writes that would zero-initialize all of table's 32768 bytes.
|
||||
SHRQ $3, DX
|
||||
LEAQ table-32768(SP), BX
|
||||
PXOR X0, X0
|
||||
|
||||
memclr:
|
||||
MOVOU X0, 0(BX)
|
||||
ADDQ $16, BX
|
||||
SUBQ $1, DX
|
||||
JNZ memclr
|
||||
|
||||
// !!! DX = &src[0]
|
||||
MOVQ SI, DX
|
||||
|
||||
// sLimit := len(src) - inputMargin
|
||||
MOVQ R14, R9
|
||||
SUBQ $15, R9
|
||||
|
||||
// !!! Pre-emptively spill CX, DX and R9 to the stack. Their values don't
|
||||
// change for the rest of the function.
|
||||
MOVQ CX, 56(SP)
|
||||
MOVQ DX, 64(SP)
|
||||
MOVQ R9, 88(SP)
|
||||
|
||||
// nextEmit := 0
|
||||
MOVQ DX, R10
|
||||
|
||||
// s := 1
|
||||
ADDQ $1, SI
|
||||
|
||||
// nextHash := hash(load32(src, s), shift)
|
||||
MOVL 0(SI), R11
|
||||
IMULL $0x1e35a7bd, R11
|
||||
SHRL CX, R11
|
||||
|
||||
outer:
|
||||
// for { etc }
|
||||
|
||||
// skip := 32
|
||||
MOVQ $32, R12
|
||||
|
||||
// nextS := s
|
||||
MOVQ SI, R13
|
||||
|
||||
// candidate := 0
|
||||
MOVQ $0, R15
|
||||
|
||||
inner0:
|
||||
// for { etc }
|
||||
|
||||
// s := nextS
|
||||
MOVQ R13, SI
|
||||
|
||||
// bytesBetweenHashLookups := skip >> 5
|
||||
MOVQ R12, R14
|
||||
SHRQ $5, R14
|
||||
|
||||
// nextS = s + bytesBetweenHashLookups
|
||||
ADDQ R14, R13
|
||||
|
||||
// skip += bytesBetweenHashLookups
|
||||
ADDQ R14, R12
|
||||
|
||||
// if nextS > sLimit { goto emitRemainder }
|
||||
MOVQ R13, AX
|
||||
SUBQ DX, AX
|
||||
CMPQ AX, R9
|
||||
JA emitRemainder
|
||||
|
||||
// candidate = int(table[nextHash])
|
||||
// XXX: MOVWQZX table-32768(SP)(R11*2), R15
|
||||
// XXX: 4e 0f b7 7c 5c 78 movzwq 0x78(%rsp,%r11,2),%r15
|
||||
BYTE $0x4e
|
||||
BYTE $0x0f
|
||||
BYTE $0xb7
|
||||
BYTE $0x7c
|
||||
BYTE $0x5c
|
||||
BYTE $0x78
|
||||
|
||||
// table[nextHash] = uint16(s)
|
||||
MOVQ SI, AX
|
||||
SUBQ DX, AX
|
||||
|
||||
// XXX: MOVW AX, table-32768(SP)(R11*2)
|
||||
// XXX: 66 42 89 44 5c 78 mov %ax,0x78(%rsp,%r11,2)
|
||||
BYTE $0x66
|
||||
BYTE $0x42
|
||||
BYTE $0x89
|
||||
BYTE $0x44
|
||||
BYTE $0x5c
|
||||
BYTE $0x78
|
||||
|
||||
// nextHash = hash(load32(src, nextS), shift)
|
||||
MOVL 0(R13), R11
|
||||
IMULL $0x1e35a7bd, R11
|
||||
SHRL CX, R11
|
||||
|
||||
// if load32(src, s) != load32(src, candidate) { continue } break
|
||||
MOVL 0(SI), AX
|
||||
MOVL (DX)(R15*1), BX
|
||||
CMPL AX, BX
|
||||
JNE inner0
|
||||
|
||||
fourByteMatch:
|
||||
// As per the encode_other.go code:
|
||||
//
|
||||
// A 4-byte match has been found. We'll later see etc.
|
||||
|
||||
// !!! Jump to a fast path for short (<= 16 byte) literals. See the comment
|
||||
// on inputMargin in encode.go.
|
||||
MOVQ SI, AX
|
||||
SUBQ R10, AX
|
||||
CMPQ AX, $16
|
||||
JLE emitLiteralFastPath
|
||||
|
||||
// ----------------------------------------
|
||||
// Begin inline of the emitLiteral call.
|
||||
//
|
||||
// d += emitLiteral(dst[d:], src[nextEmit:s])
|
||||
|
||||
MOVL AX, BX
|
||||
SUBL $1, BX
|
||||
|
||||
CMPL BX, $60
|
||||
JLT inlineEmitLiteralOneByte
|
||||
CMPL BX, $256
|
||||
JLT inlineEmitLiteralTwoBytes
|
||||
|
||||
inlineEmitLiteralThreeBytes:
|
||||
MOVB $0xf4, 0(DI)
|
||||
MOVW BX, 1(DI)
|
||||
ADDQ $3, DI
|
||||
JMP inlineEmitLiteralMemmove
|
||||
|
||||
inlineEmitLiteralTwoBytes:
|
||||
MOVB $0xf0, 0(DI)
|
||||
MOVB BX, 1(DI)
|
||||
ADDQ $2, DI
|
||||
JMP inlineEmitLiteralMemmove
|
||||
|
||||
inlineEmitLiteralOneByte:
|
||||
SHLB $2, BX
|
||||
MOVB BX, 0(DI)
|
||||
ADDQ $1, DI
|
||||
|
||||
inlineEmitLiteralMemmove:
|
||||
// Spill local variables (registers) onto the stack; call; unspill.
|
||||
//
|
||||
// copy(dst[i:], lit)
|
||||
//
|
||||
// This means calling runtime·memmove(&dst[i], &lit[0], len(lit)), so we push
|
||||
// DI, R10 and AX as arguments.
|
||||
MOVQ DI, 0(SP)
|
||||
MOVQ R10, 8(SP)
|
||||
MOVQ AX, 16(SP)
|
||||
ADDQ AX, DI // Finish the "d +=" part of "d += emitLiteral(etc)".
|
||||
MOVQ SI, 72(SP)
|
||||
MOVQ DI, 80(SP)
|
||||
MOVQ R15, 112(SP)
|
||||
CALL runtime·memmove(SB)
|
||||
MOVQ 56(SP), CX
|
||||
MOVQ 64(SP), DX
|
||||
MOVQ 72(SP), SI
|
||||
MOVQ 80(SP), DI
|
||||
MOVQ 88(SP), R9
|
||||
MOVQ 112(SP), R15
|
||||
JMP inner1
|
||||
|
||||
inlineEmitLiteralEnd:
|
||||
// End inline of the emitLiteral call.
|
||||
// ----------------------------------------
|
||||
|
||||
emitLiteralFastPath:
|
||||
// !!! Emit the 1-byte encoding "uint8(len(lit)-1)<<2".
|
||||
MOVB AX, BX
|
||||
SUBB $1, BX
|
||||
SHLB $2, BX
|
||||
MOVB BX, (DI)
|
||||
ADDQ $1, DI
|
||||
|
||||
// !!! Implement the copy from lit to dst as a 16-byte load and store.
|
||||
// (Encode's documentation says that dst and src must not overlap.)
|
||||
//
|
||||
// This always copies 16 bytes, instead of only len(lit) bytes, but that's
|
||||
// OK. Subsequent iterations will fix up the overrun.
|
||||
//
|
||||
// Note that on amd64, it is legal and cheap to issue unaligned 8-byte or
|
||||
// 16-byte loads and stores. This technique probably wouldn't be as
|
||||
// effective on architectures that are fussier about alignment.
|
||||
MOVOU 0(R10), X0
|
||||
MOVOU X0, 0(DI)
|
||||
ADDQ AX, DI
|
||||
|
||||
inner1:
|
||||
// for { etc }
|
||||
|
||||
// base := s
|
||||
MOVQ SI, R12
|
||||
|
||||
// !!! offset := base - candidate
|
||||
MOVQ R12, R11
|
||||
SUBQ R15, R11
|
||||
SUBQ DX, R11
|
||||
|
||||
// ----------------------------------------
|
||||
// Begin inline of the extendMatch call.
|
||||
//
|
||||
// s = extendMatch(src, candidate+4, s+4)
|
||||
|
||||
// !!! R14 = &src[len(src)]
|
||||
MOVQ src_len+32(FP), R14
|
||||
ADDQ DX, R14
|
||||
|
||||
// !!! R13 = &src[len(src) - 8]
|
||||
MOVQ R14, R13
|
||||
SUBQ $8, R13
|
||||
|
||||
// !!! R15 = &src[candidate + 4]
|
||||
ADDQ $4, R15
|
||||
ADDQ DX, R15
|
||||
|
||||
// !!! s += 4
|
||||
ADDQ $4, SI
|
||||
|
||||
inlineExtendMatchCmp8:
|
||||
// As long as we are 8 or more bytes before the end of src, we can load and
|
||||
// compare 8 bytes at a time. If those 8 bytes are equal, repeat.
|
||||
CMPQ SI, R13
|
||||
JA inlineExtendMatchCmp1
|
||||
MOVQ (R15), AX
|
||||
MOVQ (SI), BX
|
||||
CMPQ AX, BX
|
||||
JNE inlineExtendMatchBSF
|
||||
ADDQ $8, R15
|
||||
ADDQ $8, SI
|
||||
JMP inlineExtendMatchCmp8
|
||||
|
||||
inlineExtendMatchBSF:
|
||||
// If those 8 bytes were not equal, XOR the two 8 byte values, and return
|
||||
// the index of the first byte that differs. The BSF instruction finds the
|
||||
// least significant 1 bit, the amd64 architecture is little-endian, and
|
||||
// the shift by 3 converts a bit index to a byte index.
|
||||
XORQ AX, BX
|
||||
BSFQ BX, BX
|
||||
SHRQ $3, BX
|
||||
ADDQ BX, SI
|
||||
JMP inlineExtendMatchEnd
|
||||
|
||||
inlineExtendMatchCmp1:
|
||||
// In src's tail, compare 1 byte at a time.
|
||||
CMPQ SI, R14
|
||||
JAE inlineExtendMatchEnd
|
||||
MOVB (R15), AX
|
||||
MOVB (SI), BX
|
||||
CMPB AX, BX
|
||||
JNE inlineExtendMatchEnd
|
||||
ADDQ $1, R15
|
||||
ADDQ $1, SI
|
||||
JMP inlineExtendMatchCmp1
|
||||
|
||||
inlineExtendMatchEnd:
|
||||
// End inline of the extendMatch call.
|
||||
// ----------------------------------------
|
||||
|
||||
// ----------------------------------------
|
||||
// Begin inline of the emitCopy call.
|
||||
//
|
||||
// d += emitCopy(dst[d:], base-candidate, s-base)
|
||||
|
||||
// !!! length := s - base
|
||||
MOVQ SI, AX
|
||||
SUBQ R12, AX
|
||||
|
||||
inlineEmitCopyLoop0:
|
||||
// for length >= 68 { etc }
|
||||
CMPL AX, $68
|
||||
JLT inlineEmitCopyStep1
|
||||
|
||||
// Emit a length 64 copy, encoded as 3 bytes.
|
||||
MOVB $0xfe, 0(DI)
|
||||
MOVW R11, 1(DI)
|
||||
ADDQ $3, DI
|
||||
SUBL $64, AX
|
||||
JMP inlineEmitCopyLoop0
|
||||
|
||||
inlineEmitCopyStep1:
|
||||
// if length > 64 { etc }
|
||||
CMPL AX, $64
|
||||
JLE inlineEmitCopyStep2
|
||||
|
||||
// Emit a length 60 copy, encoded as 3 bytes.
|
||||
MOVB $0xee, 0(DI)
|
||||
MOVW R11, 1(DI)
|
||||
ADDQ $3, DI
|
||||
SUBL $60, AX
|
||||
|
||||
inlineEmitCopyStep2:
|
||||
// if length >= 12 || offset >= 2048 { goto inlineEmitCopyStep3 }
|
||||
CMPL AX, $12
|
||||
JGE inlineEmitCopyStep3
|
||||
CMPL R11, $2048
|
||||
JGE inlineEmitCopyStep3
|
||||
|
||||
// Emit the remaining copy, encoded as 2 bytes.
|
||||
MOVB R11, 1(DI)
|
||||
SHRL $8, R11
|
||||
SHLB $5, R11
|
||||
SUBB $4, AX
|
||||
SHLB $2, AX
|
||||
ORB AX, R11
|
||||
ORB $1, R11
|
||||
MOVB R11, 0(DI)
|
||||
ADDQ $2, DI
|
||||
JMP inlineEmitCopyEnd
|
||||
|
||||
inlineEmitCopyStep3:
|
||||
// Emit the remaining copy, encoded as 3 bytes.
|
||||
SUBL $1, AX
|
||||
SHLB $2, AX
|
||||
ORB $2, AX
|
||||
MOVB AX, 0(DI)
|
||||
MOVW R11, 1(DI)
|
||||
ADDQ $3, DI
|
||||
|
||||
inlineEmitCopyEnd:
|
||||
// End inline of the emitCopy call.
|
||||
// ----------------------------------------
|
||||
|
||||
// nextEmit = s
|
||||
MOVQ SI, R10
|
||||
|
||||
// if s >= sLimit { goto emitRemainder }
|
||||
MOVQ SI, AX
|
||||
SUBQ DX, AX
|
||||
CMPQ AX, R9
|
||||
JAE emitRemainder
|
||||
|
||||
// As per the encode_other.go code:
|
||||
//
|
||||
// We could immediately etc.
|
||||
|
||||
// x := load64(src, s-1)
|
||||
MOVQ -1(SI), R14
|
||||
|
||||
// prevHash := hash(uint32(x>>0), shift)
|
||||
MOVL R14, R11
|
||||
IMULL $0x1e35a7bd, R11
|
||||
SHRL CX, R11
|
||||
|
||||
// table[prevHash] = uint16(s-1)
|
||||
MOVQ SI, AX
|
||||
SUBQ DX, AX
|
||||
SUBQ $1, AX
|
||||
|
||||
// XXX: MOVW AX, table-32768(SP)(R11*2)
|
||||
// XXX: 66 42 89 44 5c 78 mov %ax,0x78(%rsp,%r11,2)
|
||||
BYTE $0x66
|
||||
BYTE $0x42
|
||||
BYTE $0x89
|
||||
BYTE $0x44
|
||||
BYTE $0x5c
|
||||
BYTE $0x78
|
||||
|
||||
// currHash := hash(uint32(x>>8), shift)
|
||||
SHRQ $8, R14
|
||||
MOVL R14, R11
|
||||
IMULL $0x1e35a7bd, R11
|
||||
SHRL CX, R11
|
||||
|
||||
// candidate = int(table[currHash])
|
||||
// XXX: MOVWQZX table-32768(SP)(R11*2), R15
|
||||
// XXX: 4e 0f b7 7c 5c 78 movzwq 0x78(%rsp,%r11,2),%r15
|
||||
BYTE $0x4e
|
||||
BYTE $0x0f
|
||||
BYTE $0xb7
|
||||
BYTE $0x7c
|
||||
BYTE $0x5c
|
||||
BYTE $0x78
|
||||
|
||||
// table[currHash] = uint16(s)
|
||||
ADDQ $1, AX
|
||||
|
||||
// XXX: MOVW AX, table-32768(SP)(R11*2)
|
||||
// XXX: 66 42 89 44 5c 78 mov %ax,0x78(%rsp,%r11,2)
|
||||
BYTE $0x66
|
||||
BYTE $0x42
|
||||
BYTE $0x89
|
||||
BYTE $0x44
|
||||
BYTE $0x5c
|
||||
BYTE $0x78
|
||||
|
||||
// if uint32(x>>8) == load32(src, candidate) { continue }
|
||||
MOVL (DX)(R15*1), BX
|
||||
CMPL R14, BX
|
||||
JEQ inner1
|
||||
|
||||
// nextHash = hash(uint32(x>>16), shift)
|
||||
SHRQ $8, R14
|
||||
MOVL R14, R11
|
||||
IMULL $0x1e35a7bd, R11
|
||||
SHRL CX, R11
|
||||
|
||||
// s++
|
||||
ADDQ $1, SI
|
||||
|
||||
// break out of the inner1 for loop, i.e. continue the outer loop.
|
||||
JMP outer
|
||||
|
||||
emitRemainder:
|
||||
// if nextEmit < len(src) { etc }
|
||||
MOVQ src_len+32(FP), AX
|
||||
ADDQ DX, AX
|
||||
CMPQ R10, AX
|
||||
JEQ encodeBlockEnd
|
||||
|
||||
// d += emitLiteral(dst[d:], src[nextEmit:])
|
||||
//
|
||||
// Push args.
|
||||
MOVQ DI, 0(SP)
|
||||
MOVQ $0, 8(SP) // Unnecessary, as the callee ignores it, but conservative.
|
||||
MOVQ $0, 16(SP) // Unnecessary, as the callee ignores it, but conservative.
|
||||
MOVQ R10, 24(SP)
|
||||
SUBQ R10, AX
|
||||
MOVQ AX, 32(SP)
|
||||
MOVQ AX, 40(SP) // Unnecessary, as the callee ignores it, but conservative.
|
||||
|
||||
// Spill local variables (registers) onto the stack; call; unspill.
|
||||
MOVQ DI, 80(SP)
|
||||
CALL ·emitLiteral(SB)
|
||||
MOVQ 80(SP), DI
|
||||
|
||||
// Finish the "d +=" part of "d += emitLiteral(etc)".
|
||||
ADDQ 48(SP), DI
|
||||
|
||||
encodeBlockEnd:
|
||||
MOVQ dst_base+0(FP), AX
|
||||
SUBQ AX, DI
|
||||
MOVQ DI, d+48(FP)
|
||||
RET
|
||||
238
vendor/github.com/golang/snappy/encode_other.go
generated
vendored
Normal file
@ -0,0 +1,238 @@
|
||||
// Copyright 2016 The Snappy-Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
// +build !amd64 appengine !gc noasm
|
||||
|
||||
package snappy
|
||||
|
||||
func load32(b []byte, i int) uint32 {
|
||||
b = b[i : i+4 : len(b)] // Help the compiler eliminate bounds checks on the next line.
|
||||
return uint32(b[0]) | uint32(b[1])<<8 | uint32(b[2])<<16 | uint32(b[3])<<24
|
||||
}
|
||||
|
||||
func load64(b []byte, i int) uint64 {
|
||||
b = b[i : i+8 : len(b)] // Help the compiler eliminate bounds checks on the next line.
|
||||
return uint64(b[0]) | uint64(b[1])<<8 | uint64(b[2])<<16 | uint64(b[3])<<24 |
|
||||
uint64(b[4])<<32 | uint64(b[5])<<40 | uint64(b[6])<<48 | uint64(b[7])<<56
|
||||
}
|
||||
|
||||
// emitLiteral writes a literal chunk and returns the number of bytes written.
|
||||
//
|
||||
// It assumes that:
|
||||
// dst is long enough to hold the encoded bytes
|
||||
// 1 <= len(lit) && len(lit) <= 65536
|
||||
func emitLiteral(dst, lit []byte) int {
|
||||
i, n := 0, uint(len(lit)-1)
|
||||
switch {
|
||||
case n < 60:
|
||||
dst[0] = uint8(n)<<2 | tagLiteral
|
||||
i = 1
|
||||
case n < 1<<8:
|
||||
dst[0] = 60<<2 | tagLiteral
|
||||
dst[1] = uint8(n)
|
||||
i = 2
|
||||
default:
|
||||
dst[0] = 61<<2 | tagLiteral
|
||||
dst[1] = uint8(n)
|
||||
dst[2] = uint8(n >> 8)
|
||||
i = 3
|
||||
}
|
||||
return i + copy(dst[i:], lit)
|
||||
}
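The switch above maps len(lit)-1 onto a 1, 2 or 3 byte header. As an aside (not part of the vendored file), here is a minimal standalone sketch that mirrors that header arithmetic, so the three cases can be checked by hand:

package main

import "fmt"

const tagLiteral = 0x00

// literalHeader mirrors emitLiteral's header bytes for a literal of the
// given length (illustration only; the real function also copies the bytes).
func literalHeader(litLen int) []byte {
	n := uint(litLen - 1)
	switch {
	case n < 60:
		return []byte{uint8(n)<<2 | tagLiteral}
	case n < 1<<8:
		return []byte{60<<2 | tagLiteral, uint8(n)}
	default:
		return []byte{61<<2 | tagLiteral, uint8(n), uint8(n >> 8)}
	}
}

func main() {
	for _, l := range []int{5, 100, 1000} {
		fmt.Printf("len(lit)=%4d -> header % x\n", l, literalHeader(l))
	}
}

For len(lit) = 5 this prints 10, for 100 it prints f0 63, and for 1000 it prints f4 e7 03, matching the n < 60, n < 1<<8 and default cases respectively.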
|
||||
|
||||
// emitCopy writes a copy chunk and returns the number of bytes written.
|
||||
//
|
||||
// It assumes that:
|
||||
// dst is long enough to hold the encoded bytes
|
||||
// 1 <= offset && offset <= 65535
|
||||
// 4 <= length && length <= 65535
|
||||
func emitCopy(dst []byte, offset, length int) int {
|
||||
i := 0
|
||||
// The maximum length for a single tagCopy1 or tagCopy2 op is 64 bytes. The
|
||||
// threshold for this loop is a little higher (at 68 = 64 + 4), and the
|
||||
// length emitted down below is a little lower (at 60 = 64 - 4), because
|
||||
// it's shorter to encode a length 67 copy as a length 60 tagCopy2 followed
|
||||
// by a length 7 tagCopy1 (which encodes as 3+2 bytes) than to encode it as
|
||||
// a length 64 tagCopy2 followed by a length 3 tagCopy2 (which encodes as
|
||||
// 3+3 bytes). The magic 4 in the 64±4 is because the minimum length for a
|
||||
// tagCopy1 op is 4 bytes, which is why a length 3 copy has to be an
|
||||
// encodes-as-3-bytes tagCopy2 instead of an encodes-as-2-bytes tagCopy1.
|
||||
for length >= 68 {
|
||||
// Emit a length 64 copy, encoded as 3 bytes.
|
||||
dst[i+0] = 63<<2 | tagCopy2
|
||||
dst[i+1] = uint8(offset)
|
||||
dst[i+2] = uint8(offset >> 8)
|
||||
i += 3
|
||||
length -= 64
|
||||
}
|
||||
if length > 64 {
|
||||
// Emit a length 60 copy, encoded as 3 bytes.
|
||||
dst[i+0] = 59<<2 | tagCopy2
|
||||
dst[i+1] = uint8(offset)
|
||||
dst[i+2] = uint8(offset >> 8)
|
||||
i += 3
|
||||
length -= 60
|
||||
}
|
||||
if length >= 12 || offset >= 2048 {
|
||||
// Emit the remaining copy, encoded as 3 bytes.
|
||||
dst[i+0] = uint8(length-1)<<2 | tagCopy2
|
||||
dst[i+1] = uint8(offset)
|
||||
dst[i+2] = uint8(offset >> 8)
|
||||
return i + 3
|
||||
}
|
||||
// Emit the remaining copy, encoded as 2 bytes.
|
||||
dst[i+0] = uint8(offset>>8)<<5 | uint8(length-4)<<2 | tagCopy1
|
||||
dst[i+1] = uint8(offset)
|
||||
return i + 2
|
||||
}
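The threshold comment above is easiest to see with concrete numbers. A tiny standalone sketch (not part of the vendored file), assuming the remaining offset is under 2048 so the 2-byte tagCopy1 form is available:

package main

import "fmt"

func main() {
	// A length-67 copy split as 60 + 7: a 3-byte tagCopy2 followed by a
	// 2-byte tagCopy1 (7 lies in [4, 12)), 5 bytes in total.
	// Split as 64 + 3 instead: a 3-byte tagCopy2 followed by another
	// 3-byte tagCopy2, because a length-3 copy cannot use tagCopy1,
	// 6 bytes in total. Hence the loop threshold of 68 and the 60 below it.
	fmt.Println("60+7:", 3+2, "bytes  64+3:", 3+3, "bytes")
}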
|
||||
|
||||
// extendMatch returns the largest k such that k <= len(src) and that
|
||||
// src[i:i+k-j] and src[j:k] have the same contents.
|
||||
//
|
||||
// It assumes that:
|
||||
// 0 <= i && i < j && j <= len(src)
|
||||
func extendMatch(src []byte, i, j int) int {
|
||||
for ; j < len(src) && src[i] == src[j]; i, j = i+1, j+1 {
|
||||
}
|
||||
return j
|
||||
}
|
||||
|
||||
func hash(u, shift uint32) uint32 {
|
||||
return (u * 0x1e35a7bd) >> shift
|
||||
}
|
||||
|
||||
// encodeBlock encodes a non-empty src to a guaranteed-large-enough dst. It
|
||||
// assumes that the varint-encoded length of the decompressed bytes has already
|
||||
// been written.
|
||||
//
|
||||
// It also assumes that:
|
||||
// len(dst) >= MaxEncodedLen(len(src)) &&
|
||||
// minNonLiteralBlockSize <= len(src) && len(src) <= maxBlockSize
|
||||
func encodeBlock(dst, src []byte) (d int) {
|
||||
// Initialize the hash table. Its size ranges from 1<<8 to 1<<14 inclusive.
|
||||
// The table element type is uint16, as s < sLimit and sLimit < len(src)
|
||||
// and len(src) <= maxBlockSize and maxBlockSize == 65536.
|
||||
const (
|
||||
maxTableSize = 1 << 14
|
||||
// tableMask is redundant, but helps the compiler eliminate bounds
|
||||
// checks.
|
||||
tableMask = maxTableSize - 1
|
||||
)
|
||||
shift := uint32(32 - 8)
|
||||
for tableSize := 1 << 8; tableSize < maxTableSize && tableSize < len(src); tableSize *= 2 {
|
||||
shift--
|
||||
}
|
||||
// In Go, all array elements are zero-initialized, so there is no advantage
|
||||
// to a smaller tableSize per se. However, it matches the C++ algorithm,
|
||||
// and in the asm versions of this code, we can get away with zeroing only
|
||||
// the first tableSize elements.
|
||||
var table [maxTableSize]uint16
|
||||
|
||||
// sLimit is when to stop looking for offset/length copies. The inputMargin
|
||||
// lets us use a fast path for emitLiteral in the main loop, while we are
|
||||
// looking for copies.
|
||||
sLimit := len(src) - inputMargin
|
||||
|
||||
// nextEmit is where in src the next emitLiteral should start from.
|
||||
nextEmit := 0
|
||||
|
||||
// The encoded form must start with a literal, as there are no previous
|
||||
// bytes to copy, so we start looking for hash matches at s == 1.
|
||||
s := 1
|
||||
nextHash := hash(load32(src, s), shift)
|
||||
|
||||
for {
|
||||
// Copied from the C++ snappy implementation:
|
||||
//
|
||||
// Heuristic match skipping: If 32 bytes are scanned with no matches
|
||||
// found, start looking only at every other byte. If 32 more bytes are
|
||||
// scanned (or skipped), look at every third byte, etc.. When a match
|
||||
// is found, immediately go back to looking at every byte. This is a
|
||||
// small loss (~5% performance, ~0.1% density) for compressible data
|
||||
// due to more bookkeeping, but for non-compressible data (such as
|
||||
// JPEG) it's a huge win since the compressor quickly "realizes" the
|
||||
// data is incompressible and doesn't bother looking for matches
|
||||
// everywhere.
|
||||
//
|
||||
// The "skip" variable keeps track of how many bytes there are since
|
||||
// the last match; dividing it by 32 (ie. right-shifting by five) gives
|
||||
// the number of bytes to move ahead for each iteration.
|
||||
skip := 32
|
||||
|
||||
nextS := s
|
||||
candidate := 0
|
||||
for {
|
||||
s = nextS
|
||||
bytesBetweenHashLookups := skip >> 5
|
||||
nextS = s + bytesBetweenHashLookups
|
||||
skip += bytesBetweenHashLookups
|
||||
if nextS > sLimit {
|
||||
goto emitRemainder
|
||||
}
|
||||
candidate = int(table[nextHash&tableMask])
|
||||
table[nextHash&tableMask] = uint16(s)
|
||||
nextHash = hash(load32(src, nextS), shift)
|
||||
if load32(src, s) == load32(src, candidate) {
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
// A 4-byte match has been found. We'll later see if more than 4 bytes
|
||||
// match. But, prior to the match, src[nextEmit:s] are unmatched. Emit
|
||||
// them as literal bytes.
|
||||
d += emitLiteral(dst[d:], src[nextEmit:s])
|
||||
|
||||
// Call emitCopy, and then see if another emitCopy could be our next
|
||||
// move. Repeat until we find no match for the input immediately after
|
||||
// what was consumed by the last emitCopy call.
|
||||
//
|
||||
// If we exit this loop normally then we need to call emitLiteral next,
|
||||
// though we don't yet know how big the literal will be. We handle that
|
||||
// by proceeding to the next iteration of the main loop. We also can
|
||||
// exit this loop via goto if we get close to exhausting the input.
|
||||
for {
|
||||
// Invariant: we have a 4-byte match at s, and no need to emit any
|
||||
// literal bytes prior to s.
|
||||
base := s
|
||||
|
||||
// Extend the 4-byte match as long as possible.
|
||||
//
|
||||
// This is an inlined version of:
|
||||
// s = extendMatch(src, candidate+4, s+4)
|
||||
s += 4
|
||||
for i := candidate + 4; s < len(src) && src[i] == src[s]; i, s = i+1, s+1 {
|
||||
}
|
||||
|
||||
d += emitCopy(dst[d:], base-candidate, s-base)
|
||||
nextEmit = s
|
||||
if s >= sLimit {
|
||||
goto emitRemainder
|
||||
}
|
||||
|
||||
// We could immediately start working at s now, but to improve
|
||||
// compression we first update the hash table at s-1 and at s. If
|
||||
// another emitCopy is not our next move, also calculate nextHash
|
||||
// at s+1. At least on GOARCH=amd64, these three hash calculations
|
||||
// are faster as one load64 call (with some shifts) instead of
|
||||
// three load32 calls.
|
||||
x := load64(src, s-1)
|
||||
prevHash := hash(uint32(x>>0), shift)
|
||||
table[prevHash&tableMask] = uint16(s - 1)
|
||||
currHash := hash(uint32(x>>8), shift)
|
||||
candidate = int(table[currHash&tableMask])
|
||||
table[currHash&tableMask] = uint16(s)
|
||||
if uint32(x>>8) != load32(src, candidate) {
|
||||
nextHash = hash(uint32(x>>16), shift)
|
||||
s++
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
emitRemainder:
|
||||
if nextEmit < len(src) {
|
||||
d += emitLiteral(dst[d:], src[nextEmit:])
|
||||
}
|
||||
return d
|
||||
}
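encodeBlock is the unexported core; callers normally go through the package's exported Encode and Decode, which also handle the varint length header and are not part of this diff. A minimal round-trip sketch:

package main

import (
	"bytes"
	"fmt"

	"github.com/golang/snappy"
)

func main() {
	src := bytes.Repeat([]byte("hello snappy "), 100)

	// Encode writes the varint length header and then block-encodes src
	// (internally via encodeBlock on amd64 or the pure Go fallback above).
	enc := snappy.Encode(nil, src)

	dec, err := snappy.Decode(nil, enc)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(src), "->", len(enc), "bytes; round-trips:", bytes.Equal(src, dec))
}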
|
||||
87
vendor/github.com/golang/snappy/snappy.go
generated
vendored
Normal file
@ -0,0 +1,87 @@
|
||||
// Copyright 2011 The Snappy-Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
// Package snappy implements the snappy block-based compression format.
|
||||
// It aims for very high speeds and reasonable compression.
|
||||
//
|
||||
// The C++ snappy implementation is at https://github.com/google/snappy
|
||||
package snappy
|
||||
|
||||
import (
|
||||
"hash/crc32"
|
||||
)
|
||||
|
||||
/*
|
||||
Each encoded block begins with the varint-encoded length of the decoded data,
|
||||
followed by a sequence of chunks. Chunks begin and end on byte boundaries. The
|
||||
first byte of each chunk is broken into its 2 least and 6 most significant bits
|
||||
called l and m: l ranges in [0, 4) and m ranges in [0, 64). l is the chunk tag.
|
||||
Zero means a literal tag. All other values mean a copy tag.
|
||||
|
||||
For literal tags:
|
||||
- If m < 60, the next 1 + m bytes are literal bytes.
|
||||
- Otherwise, let n be the little-endian unsigned integer denoted by the next
|
||||
m - 59 bytes. The next 1 + n bytes after that are literal bytes.
|
||||
|
||||
For copy tags, length bytes are copied from offset bytes ago, in the style of
|
||||
Lempel-Ziv compression algorithms. In particular:
|
||||
- For l == 1, the offset ranges in [0, 1<<11) and the length in [4, 12).
|
||||
The length is 4 + the low 3 bits of m. The high 3 bits of m form bits 8-10
|
||||
of the offset. The next byte is bits 0-7 of the offset.
|
||||
- For l == 2, the offset ranges in [0, 1<<16) and the length in [1, 65).
|
||||
The length is 1 + m. The offset is the little-endian unsigned integer
|
||||
denoted by the next 2 bytes.
|
||||
- For l == 3, this tag is a legacy format that is no longer issued by most
|
||||
encoders. Nonetheless, the offset ranges in [0, 1<<32) and the length in
|
||||
[1, 65). The length is 1 + m. The offset is the little-endian unsigned
|
||||
integer denoted by the next 4 bytes.
|
||||
*/
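As an illustration of the copy-tag layout described above (not part of the vendored file), a small sketch that decodes a 2-byte tagCopy1 chunk:

package main

import "fmt"

// decodeCopy1 interprets a 2-byte tagCopy1 chunk per the description above:
// length is 4 + the low 3 bits of m, the high 3 bits of m are offset bits
// 8-10, and the second byte holds offset bits 0-7. Illustration only.
func decodeCopy1(b0, b1 byte) (length, offset int) {
	m := b0 >> 2
	length = 4 + int(m&0x07)
	offset = int(m>>3)<<8 | int(b1)
	return length, offset
}

func main() {
	// 0x2d: low 2 bits are 01 (tagCopy1), m = 0b001011.
	length, offset := decodeCopy1(0x2d, 0x34)
	fmt.Println("length:", length, "offset:", offset) // length: 7 offset: 308
}

This is the inverse of the 2-byte branch in emitCopy: offset>>8 goes into the high 3 bits of m and length-4 into the low 3 bits.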
|
||||
const (
|
||||
tagLiteral = 0x00
|
||||
tagCopy1 = 0x01
|
||||
tagCopy2 = 0x02
|
||||
tagCopy4 = 0x03
|
||||
)
|
||||
|
||||
const (
|
||||
checksumSize = 4
|
||||
chunkHeaderSize = 4
|
||||
magicChunk = "\xff\x06\x00\x00" + magicBody
|
||||
magicBody = "sNaPpY"
|
||||
|
||||
// maxBlockSize is the maximum size of the input to encodeBlock. It is not
|
||||
// part of the wire format per se, but some parts of the encoder assume
|
||||
// that an offset fits into a uint16.
|
||||
//
|
||||
// Also, for the framing format (Writer type instead of Encode function),
|
||||
// https://github.com/google/snappy/blob/master/framing_format.txt says
|
||||
// that "the uncompressed data in a chunk must be no longer than 65536
|
||||
// bytes".
|
||||
maxBlockSize = 65536
|
||||
|
||||
// maxEncodedLenOfMaxBlockSize equals MaxEncodedLen(maxBlockSize), but is
|
||||
// hard coded to be a const instead of a variable, so that obufLen can also
|
||||
// be a const. Their equivalence is confirmed by
|
||||
// TestMaxEncodedLenOfMaxBlockSize.
|
||||
maxEncodedLenOfMaxBlockSize = 76490
|
||||
|
||||
obufHeaderLen = len(magicChunk) + checksumSize + chunkHeaderSize
|
||||
obufLen = obufHeaderLen + maxEncodedLenOfMaxBlockSize
|
||||
)
|
||||
|
||||
const (
|
||||
chunkTypeCompressedData = 0x00
|
||||
chunkTypeUncompressedData = 0x01
|
||||
chunkTypePadding = 0xfe
|
||||
chunkTypeStreamIdentifier = 0xff
|
||||
)
|
||||
|
||||
var crcTable = crc32.MakeTable(crc32.Castagnoli)
|
||||
|
||||
// crc implements the checksum specified in section 3 of
|
||||
// https://github.com/google/snappy/blob/master/framing_format.txt
|
||||
func crc(b []byte) uint32 {
|
||||
c := crc32.Update(0, crcTable, b)
|
||||
return uint32(c>>15|c<<17) + 0xa282ead8
|
||||
}
|
||||
1
vendor/github.com/miekg/dns/AUTHORS
generated
vendored
Normal file
@ -0,0 +1 @@
|
||||
Miek Gieben <miek@miek.nl>
|
||||
10
vendor/github.com/miekg/dns/CONTRIBUTORS
generated
vendored
Normal file
@ -0,0 +1,10 @@
|
||||
Alex A. Skinner
|
||||
Andrew Tunnell-Jones
|
||||
Ask Bjørn Hansen
|
||||
Dave Cheney
|
||||
Dusty Wilson
|
||||
Marek Majkowski
|
||||
Peter van Dijk
|
||||
Omri Bahumi
|
||||
Alex Sergeyev
|
||||
James Hartig
|
||||
9
vendor/github.com/miekg/dns/COPYRIGHT
generated
vendored
Normal file
@ -0,0 +1,9 @@
|
||||
Copyright 2009 The Go Authors. All rights reserved. Use of this source code
|
||||
is governed by a BSD-style license that can be found in the LICENSE file.
|
||||
Extensions of the original work are copyright (c) 2011 Miek Gieben
|
||||
|
||||
Copyright 2011 Miek Gieben. All rights reserved. Use of this source code is
|
||||
governed by a BSD-style license that can be found in the LICENSE file.
|
||||
|
||||
Copyright 2014 CloudFlare. All rights reserved. Use of this source code is
|
||||
governed by a BSD-style license that can be found in the LICENSE file.
|
||||
21
vendor/github.com/miekg/dns/Gopkg.lock
generated
vendored
Normal file
@ -0,0 +1,21 @@
|
||||
# This file is autogenerated, do not edit; changes may be undone by the next 'dep ensure'.
|
||||
|
||||
|
||||
[[projects]]
|
||||
branch = "master"
|
||||
name = "golang.org/x/crypto"
|
||||
packages = ["ed25519","ed25519/internal/edwards25519"]
|
||||
revision = "b080dc9a8c480b08e698fb1219160d598526310f"
|
||||
|
||||
[[projects]]
|
||||
branch = "master"
|
||||
name = "golang.org/x/net"
|
||||
packages = ["bpf","internal/iana","internal/socket","ipv4","ipv6"]
|
||||
revision = "894f8ed5849b15b810ae41e9590a0d05395bba27"
|
||||
|
||||
[solve-meta]
|
||||
analyzer-name = "dep"
|
||||
analyzer-version = 1
|
||||
inputs-digest = "c4abc38abaeeeeb9be92455c9c02cae32841122b8982aaa067ef25bb8e86ff9d"
|
||||
solver-name = "gps-cdcl"
|
||||
solver-version = 1
|
||||
26
vendor/github.com/miekg/dns/Gopkg.toml
generated
vendored
Normal file
@ -0,0 +1,26 @@
|
||||
|
||||
# Gopkg.toml example
|
||||
#
|
||||
# Refer to https://github.com/golang/dep/blob/master/docs/Gopkg.toml.md
|
||||
# for detailed Gopkg.toml documentation.
|
||||
#
|
||||
# required = ["github.com/user/thing/cmd/thing"]
|
||||
# ignored = ["github.com/user/project/pkgX", "bitbucket.org/user/project/pkgA/pkgY"]
|
||||
#
|
||||
# [[constraint]]
|
||||
# name = "github.com/user/project"
|
||||
# version = "1.0.0"
|
||||
#
|
||||
# [[constraint]]
|
||||
# name = "github.com/user/project2"
|
||||
# branch = "dev"
|
||||
# source = "github.com/myfork/project2"
|
||||
#
|
||||
# [[override]]
|
||||
# name = "github.com/x/y"
|
||||
# version = "2.4.0"
|
||||
|
||||
|
||||
[[constraint]]
|
||||
branch = "master"
|
||||
name = "golang.org/x/crypto"
|
||||
32
vendor/github.com/miekg/dns/LICENSE
generated
vendored
Normal file
@ -0,0 +1,32 @@
|
||||
Extensions of the original work are copyright (c) 2011 Miek Gieben
|
||||
|
||||
As this is fork of the official Go code the same license applies:
|
||||
|
||||
Copyright (c) 2009 The Go Authors. All rights reserved.
|
||||
|
||||
Redistribution and use in source and binary forms, with or without
|
||||
modification, are permitted provided that the following conditions are
|
||||
met:
|
||||
|
||||
* Redistributions of source code must retain the above copyright
|
||||
notice, this list of conditions and the following disclaimer.
|
||||
* Redistributions in binary form must reproduce the above
|
||||
copyright notice, this list of conditions and the following disclaimer
|
||||
in the documentation and/or other materials provided with the
|
||||
distribution.
|
||||
* Neither the name of Google Inc. nor the names of its
|
||||
contributors may be used to endorse or promote products derived from
|
||||
this software without specific prior written permission.
|
||||
|
||||
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
|
||||
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
|
||||
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
|
||||
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
|
||||
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
|
||||
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
|
||||
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
|
||||
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
|
||||
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
|
||||
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
|
||||
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||
|
||||
33
vendor/github.com/miekg/dns/Makefile.fuzz
generated
vendored
Normal file
@ -0,0 +1,33 @@
|
||||
# Makefile for fuzzing
|
||||
#
|
||||
# Uses go-fuzz and needs the tools installed.
|
||||
# See https://blog.cloudflare.com/dns-parser-meet-go-fuzzer/
|
||||
#
|
||||
# Installing go-fuzz:
|
||||
# $ make -f Makefile.fuzz get
|
||||
# Installs:
|
||||
# * github.com/dvyukov/go-fuzz/go-fuzz
|
||||
# * github.com/dvyukov/go-fuzz/go-fuzz-build
|
||||
|
||||
all: build
|
||||
|
||||
.PHONY: build
|
||||
build:
|
||||
go-fuzz-build -tags fuzz github.com/miekg/dns
|
||||
|
||||
.PHONY: build-newrr
|
||||
build-newrr:
|
||||
go-fuzz-build -func FuzzNewRR -tags fuzz github.com/miekg/dns
|
||||
|
||||
.PHONY: fuzz
|
||||
fuzz:
|
||||
go-fuzz -bin=dns-fuzz.zip -workdir=fuzz
|
||||
|
||||
.PHONY: get
|
||||
get:
|
||||
go get github.com/dvyukov/go-fuzz/go-fuzz
|
||||
go get github.com/dvyukov/go-fuzz/go-fuzz-build
|
||||
|
||||
.PHONY: clean
|
||||
clean:
|
||||
rm *-fuzz.zip
|
||||
52
vendor/github.com/miekg/dns/Makefile.release
generated
vendored
Normal file
@ -0,0 +1,52 @@
|
||||
# Makefile for releasing.
|
||||
#
|
||||
# The release is controlled from version.go. The version found there is
|
||||
# used to tag the git repo; we're not building any artifacts, so there is nothing
|
||||
# to upload to github.
|
||||
#
|
||||
# * Up the version in version.go
|
||||
# * Run: make -f Makefile.release release
|
||||
# * will *commit* your change with 'Release $VERSION'
|
||||
# * push to github
|
||||
#
|
||||
|
||||
define GO
|
||||
//+build ignore
|
||||
|
||||
package main
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
|
||||
"github.com/miekg/dns"
|
||||
)
|
||||
|
||||
func main() {
|
||||
fmt.Println(dns.Version.String())
|
||||
}
|
||||
endef
|
||||
|
||||
$(file > version_release.go,$(GO))
|
||||
VERSION:=$(shell go run version_release.go)
|
||||
TAG="v$(VERSION)"
|
||||
|
||||
all:
|
||||
@echo Use the \'release\' target to start a release $(VERSION)
|
||||
rm -f version_release.go
|
||||
|
||||
.PHONY: release
|
||||
release: commit push
|
||||
@echo Released $(VERSION)
|
||||
rm -f version_release.go
|
||||
|
||||
.PHONY: commit
|
||||
commit:
|
||||
@echo Committing release $(VERSION)
|
||||
git commit -am"Release $(VERSION)"
|
||||
git tag $(TAG)
|
||||
|
||||
.PHONY: push
|
||||
push:
|
||||
@echo Pushing release $(VERSION) to master
|
||||
git push --tags
|
||||
git push
|
||||
168
vendor/github.com/miekg/dns/README.md
generated
vendored
Normal file
@ -0,0 +1,168 @@
|
||||
[Build Status](https://travis-ci.org/miekg/dns) [Code Coverage](https://codecov.io/github/miekg/dns?branch=master) [Go Report Card](https://goreportcard.com/report/miekg/dns) [GoDoc](https://godoc.org/github.com/miekg/dns)
|
||||
|
||||
# Alternative (more granular) approach to a DNS library
|
||||
|
||||
> Less is more.
|
||||
|
||||
Complete and usable DNS library. All widely used Resource Records are supported, including the
|
||||
DNSSEC types. It follows a lean and mean philosophy. If there is stuff you should know as a DNS
|
||||
programmer, there isn't a convenience function for it. Server side and client side programming is
|
||||
supported, i.e. you can build servers and resolvers with it.
|
||||
|
||||
We try to keep the "master" branch as sane as possible and at the bleeding edge of standards,
|
||||
avoiding breaking changes wherever reasonable. We support the last two versions of Go.
|
||||
|
||||
# Goals
|
||||
|
||||
* KISS;
|
||||
* Fast;
|
||||
* Small API. If it's easy to code in Go, don't make a function for it.
|
||||
|
||||
# Users
|
||||
|
||||
A not-so-up-to-date-list-that-may-be-actually-current:
|
||||
|
||||
* https://github.com/coredns/coredns
|
||||
* https://cloudflare.com
|
||||
* https://github.com/abh/geodns
|
||||
* http://www.statdns.com/
|
||||
* http://www.dnsinspect.com/
|
||||
* https://github.com/chuangbo/jianbing-dictionary-dns
|
||||
* http://www.dns-lg.com/
|
||||
* https://github.com/fcambus/rrda
|
||||
* https://github.com/kenshinx/godns
|
||||
* https://github.com/skynetservices/skydns
|
||||
* https://github.com/hashicorp/consul
|
||||
* https://github.com/DevelopersPL/godnsagent
|
||||
* https://github.com/duedil-ltd/discodns
|
||||
* https://github.com/StalkR/dns-reverse-proxy
|
||||
* https://github.com/tianon/rawdns
|
||||
* https://mesosphere.github.io/mesos-dns/
|
||||
* https://pulse.turbobytes.com/
|
||||
* https://play.google.com/store/apps/details?id=com.turbobytes.dig
|
||||
* https://github.com/fcambus/statzone
|
||||
* https://github.com/benschw/dns-clb-go
|
||||
* https://github.com/corny/dnscheck for http://public-dns.info/
|
||||
* https://namesmith.io
|
||||
* https://github.com/miekg/unbound
|
||||
* https://github.com/miekg/exdns
|
||||
* https://dnslookup.org
|
||||
* https://github.com/looterz/grimd
|
||||
* https://github.com/phamhongviet/serf-dns
|
||||
* https://github.com/mehrdadrad/mylg
|
||||
* https://github.com/bamarni/dockness
|
||||
* https://github.com/fffaraz/microdns
|
||||
* http://kelda.io
|
||||
* https://github.com/ipdcode/hades (JD.COM)
|
||||
* https://github.com/StackExchange/dnscontrol/
|
||||
* https://www.dnsperf.com/
|
||||
* https://dnssectest.net/
|
||||
* https://dns.apebits.com
|
||||
* https://github.com/oif/apex
|
||||
* https://github.com/jedisct1/dnscrypt-proxy
|
||||
* https://github.com/jedisct1/rpdns
|
||||
|
||||
Send pull request if you want to be listed here.
|
||||
|
||||
# Features
|
||||
|
||||
* UDP/TCP queries, IPv4 and IPv6;
|
||||
* RFC 1035 zone file parsing ($INCLUDE, $ORIGIN, $TTL and $GENERATE (for all record types) are supported);
|
||||
* Fast:
|
||||
* Reply speed around ~ 80K qps (faster hardware results in more qps);
|
||||
* Parsing RRs ~ 100K RR/s, that's 5M records in about 50 seconds;
|
||||
* Server side programming (mimicking the net/http package);
|
||||
* Client side programming;
|
||||
* DNSSEC: signing, validating and key generation for DSA, RSA, ECDSA and Ed25519;
|
||||
* EDNS0, NSID, Cookies;
|
||||
* AXFR/IXFR;
|
||||
* TSIG, SIG(0);
|
||||
* DNS over TLS: optional encrypted connection between client and server;
|
||||
* DNS name compression;
|
||||
* Depends only on the standard library.
|
||||
|
||||
Have fun!
|
||||
|
||||
Miek Gieben - 2010-2012 - <miek@miek.nl>
|
||||
|
||||
# Building
|
||||
|
||||
Building is done with the `go` tool. If you have set up your GOPATH correctly, the following should
|
||||
work:
|
||||
|
||||
go get github.com/miekg/dns
|
||||
go build github.com/miekg/dns
|
||||
|
||||
## Examples
|
||||
|
||||
A short "how to use the API" is at the beginning of doc.go (this also will show
|
||||
when you call `godoc github.com/miekg/dns`).
|
||||
|
||||
Example programs can be found in the `github.com/miekg/exdns` repository.
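For a quick start, here is a minimal query sketch (the resolver address is a placeholder):

package main

import (
	"fmt"

	"github.com/miekg/dns"
)

func main() {
	m := new(dns.Msg)
	m.SetQuestion(dns.Fqdn("miek.nl"), dns.TypeMX)

	c := new(dns.Client)
	// 8.8.8.8:53 is just a placeholder resolver address.
	r, rtt, err := c.Exchange(m, "8.8.8.8:53")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("rtt:", rtt)
	for _, rr := range r.Answer {
		fmt.Println(rr)
	}
}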
|
||||
|
||||
## Supported RFCs
|
||||
|
||||
*all of them*
|
||||
|
||||
* 103{4,5} - DNS standard
|
||||
* 1348 - NSAP record (removed the record)
|
||||
* 1982 - Serial Arithmetic
|
||||
* 1876 - LOC record
|
||||
* 1995 - IXFR
|
||||
* 1996 - DNS notify
|
||||
* 2136 - DNS Update (dynamic updates)
|
||||
* 2181 - RRset definition - there is no RRset type though, just []RR
|
||||
* 2537 - RSAMD5 DNS keys
|
||||
* 2065 - DNSSEC (updated in later RFCs)
|
||||
* 2671 - EDNS record
|
||||
* 2782 - SRV record
|
||||
* 2845 - TSIG record
|
||||
* 2915 - NAPTR record
|
||||
* 2929 - DNS IANA Considerations
|
||||
* 3110 - RSASHA1 DNS keys
|
||||
* 3225 - DO bit (DNSSEC OK)
|
||||
* 340{1,2,3} - NAPTR record
|
||||
* 3445 - Limiting the scope of (DNS)KEY
|
||||
* 3597 - Unknown RRs
|
||||
* 403{3,4,5} - DNSSEC + validation functions
|
||||
* 4255 - SSHFP record
|
||||
* 4343 - Case insensitivity
|
||||
* 4408 - SPF record
|
||||
* 4509 - SHA256 Hash in DS
|
||||
* 4592 - Wildcards in the DNS
|
||||
* 4635 - HMAC SHA TSIG
|
||||
* 4701 - DHCID
|
||||
* 4892 - id.server
|
||||
* 5001 - NSID
|
||||
* 5155 - NSEC3 record
|
||||
* 5205 - HIP record
|
||||
* 5702 - SHA2 in the DNS
|
||||
* 5936 - AXFR
|
||||
* 5966 - TCP implementation recommendations
|
||||
* 6605 - ECDSA
|
||||
* 6725 - IANA Registry Update
|
||||
* 6742 - ILNP DNS
|
||||
* 6840 - Clarifications and Implementation Notes for DNS Security
|
||||
* 6844 - CAA record
|
||||
* 6891 - EDNS0 update
|
||||
* 6895 - DNS IANA considerations
|
||||
* 6975 - Algorithm Understanding in DNSSEC
|
||||
* 7043 - EUI48/EUI64 records
|
||||
* 7314 - DNS (EDNS) EXPIRE Option
|
||||
* 7477 - CSYNC RR
|
||||
* 7828 - edns-tcp-keepalive EDNS0 Option
|
||||
* 7553 - URI record
|
||||
* 7858 - DNS over TLS: Initiation and Performance Considerations
|
||||
* 7871 - EDNS0 Client Subnet
|
||||
* 7873 - Domain Name System (DNS) Cookies (draft-ietf-dnsop-cookies)
|
||||
* 8080 - EdDSA for DNSSEC
|
||||
|
||||
## Loosely based upon
|
||||
|
||||
* `ldns`
|
||||
* `NSD`
|
||||
* `Net::DNS`
|
||||
* `GRONG`
|
||||
506
vendor/github.com/miekg/dns/client.go
generated
vendored
Normal file
@ -0,0 +1,506 @@
|
||||
package dns
|
||||
|
||||
// A client implementation.
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"crypto/tls"
|
||||
"encoding/binary"
|
||||
"io"
|
||||
"net"
|
||||
"strings"
|
||||
"time"
|
||||
)
|
||||
|
||||
const dnsTimeout time.Duration = 2 * time.Second
|
||||
const tcpIdleTimeout time.Duration = 8 * time.Second
|
||||
|
||||
// A Conn represents a connection to a DNS server.
|
||||
type Conn struct {
|
||||
net.Conn // a net.Conn holding the connection
|
||||
UDPSize uint16 // minimum receive buffer for UDP messages
|
||||
TsigSecret map[string]string // secret(s) for Tsig map[<zonename>]<base64 secret>, zonename must be in canonical form (lowercase, fqdn, see RFC 4034 Section 6.2)
|
||||
rtt time.Duration
|
||||
t time.Time
|
||||
tsigRequestMAC string
|
||||
}
|
||||
|
||||
// A Client defines parameters for a DNS client.
|
||||
type Client struct {
|
||||
Net string // if "tcp" or "tcp-tls" (DNS over TLS) a TCP query will be initiated, otherwise an UDP one (default is "" for UDP)
|
||||
UDPSize uint16 // minimum receive buffer for UDP messages
|
||||
TLSConfig *tls.Config // TLS connection configuration
|
||||
Dialer *net.Dialer // a net.Dialer used to set local address, timeouts and more
|
||||
// Timeout is a cumulative timeout for dial, write and read, defaults to 0 (disabled) - overrides DialTimeout, ReadTimeout,
|
||||
// WriteTimeout when non-zero. Can be overridden with net.Dialer.Timeout (see Client.ExchangeWithDialer and
|
||||
// Client.Dialer) or context.Context.Deadline (see the deprecated ExchangeContext)
|
||||
Timeout time.Duration
|
||||
DialTimeout time.Duration // net.DialTimeout, defaults to 2 seconds, or net.Dialer.Timeout if expiring earlier - overridden by Timeout when that value is non-zero
|
||||
ReadTimeout time.Duration // net.Conn.SetReadTimeout value for connections, defaults to 2 seconds - overridden by Timeout when that value is non-zero
|
||||
WriteTimeout time.Duration // net.Conn.SetWriteTimeout value for connections, defaults to 2 seconds - overridden by Timeout when that value is non-zero
|
||||
TsigSecret map[string]string // secret(s) for Tsig map[<zonename>]<base64 secret>, zonename must be in canonical form (lowercase, fqdn, see RFC 4034 Section 6.2)
|
||||
SingleInflight bool // if true suppress multiple outstanding queries for the same Qname, Qtype and Qclass
|
||||
group singleflight
|
||||
}
|
||||
|
||||
// Exchange performs a synchronous UDP query. It sends the message m to the address
|
||||
// contained in a and waits for a reply. Exchange does not retry a failed query, nor
|
||||
// will it fall back to TCP in case of truncation.
|
||||
// See client.Exchange for more information on setting larger buffer sizes.
|
||||
func Exchange(m *Msg, a string) (r *Msg, err error) {
|
||||
client := Client{Net: "udp"}
|
||||
r, _, err = client.Exchange(m, a)
|
||||
return r, err
|
||||
}
|
||||
|
||||
func (c *Client) dialTimeout() time.Duration {
|
||||
if c.Timeout != 0 {
|
||||
return c.Timeout
|
||||
}
|
||||
if c.DialTimeout != 0 {
|
||||
return c.DialTimeout
|
||||
}
|
||||
return dnsTimeout
|
||||
}
|
||||
|
||||
func (c *Client) readTimeout() time.Duration {
|
||||
if c.ReadTimeout != 0 {
|
||||
return c.ReadTimeout
|
||||
}
|
||||
return dnsTimeout
|
||||
}
|
||||
|
||||
func (c *Client) writeTimeout() time.Duration {
|
||||
if c.WriteTimeout != 0 {
|
||||
return c.WriteTimeout
|
||||
}
|
||||
return dnsTimeout
|
||||
}
|
||||
|
||||
// Dial connects to the address on the named network.
|
||||
func (c *Client) Dial(address string) (conn *Conn, err error) {
|
||||
// create a new dialer with the appropriate timeout
|
||||
var d net.Dialer
|
||||
if c.Dialer == nil {
|
||||
d = net.Dialer{}
|
||||
} else {
|
||||
d = net.Dialer(*c.Dialer)
|
||||
}
|
||||
d.Timeout = c.getTimeoutForRequest(c.writeTimeout())
|
||||
|
||||
network := "udp"
|
||||
useTLS := false
|
||||
|
||||
switch c.Net {
|
||||
case "tcp-tls":
|
||||
network = "tcp"
|
||||
useTLS = true
|
||||
case "tcp4-tls":
|
||||
network = "tcp4"
|
||||
useTLS = true
|
||||
case "tcp6-tls":
|
||||
network = "tcp6"
|
||||
useTLS = true
|
||||
default:
|
||||
if c.Net != "" {
|
||||
network = c.Net
|
||||
}
|
||||
}
|
||||
|
||||
conn = new(Conn)
|
||||
if useTLS {
|
||||
conn.Conn, err = tls.DialWithDialer(&d, network, address, c.TLSConfig)
|
||||
} else {
|
||||
conn.Conn, err = d.Dial(network, address)
|
||||
}
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return conn, nil
|
||||
}
|
||||
|
||||
// Exchange performs a synchronous query. It sends the message m to the address
|
||||
// contained in a and waits for a reply. Basic use pattern with a *dns.Client:
|
||||
//
|
||||
// c := new(dns.Client)
|
||||
// in, rtt, err := c.Exchange(message, "127.0.0.1:53")
|
||||
//
|
||||
// Exchange does not retry a failed query, nor will it fall back to TCP in
|
||||
// case of truncation.
|
||||
// It is up to the caller to create a message that allows for larger responses to be
|
||||
// returned. Specifically this means adding an EDNS0 OPT RR that will advertise a larger
|
||||
// buffer, see SetEdns0. Messages without an OPT RR will fall back to the historic limit
|
||||
// of 512 bytes.
|
||||
// To specify a local address or a timeout, the caller has to set the `Client.Dialer`
|
||||
// attribute appropriately.
|
||||
func (c *Client) Exchange(m *Msg, address string) (r *Msg, rtt time.Duration, err error) {
|
||||
if !c.SingleInflight {
|
||||
return c.exchange(m, address)
|
||||
}
|
||||
|
||||
t := "nop"
|
||||
if t1, ok := TypeToString[m.Question[0].Qtype]; ok {
|
||||
t = t1
|
||||
}
|
||||
cl := "nop"
|
||||
if cl1, ok := ClassToString[m.Question[0].Qclass]; ok {
|
||||
cl = cl1
|
||||
}
|
||||
r, rtt, err, shared := c.group.Do(m.Question[0].Name+t+cl, func() (*Msg, time.Duration, error) {
|
||||
return c.exchange(m, address)
|
||||
})
|
||||
if r != nil && shared {
|
||||
r = r.Copy()
|
||||
}
|
||||
return r, rtt, err
|
||||
}
|
||||
|
||||
func (c *Client) exchange(m *Msg, a string) (r *Msg, rtt time.Duration, err error) {
|
||||
var co *Conn
|
||||
|
||||
co, err = c.Dial(a)
|
||||
|
||||
if err != nil {
|
||||
return nil, 0, err
|
||||
}
|
||||
defer co.Close()
|
||||
|
||||
opt := m.IsEdns0()
|
||||
// If EDNS0 is used use that for size.
|
||||
if opt != nil && opt.UDPSize() >= MinMsgSize {
|
||||
co.UDPSize = opt.UDPSize()
|
||||
}
|
||||
// Otherwise use the client's configured UDP size.
|
||||
if opt == nil && c.UDPSize >= MinMsgSize {
|
||||
co.UDPSize = c.UDPSize
|
||||
}
|
||||
|
||||
co.TsigSecret = c.TsigSecret
|
||||
// write with the appropriate write timeout
|
||||
co.SetWriteDeadline(time.Now().Add(c.getTimeoutForRequest(c.writeTimeout())))
|
||||
if err = co.WriteMsg(m); err != nil {
|
||||
return nil, 0, err
|
||||
}
|
||||
|
||||
co.SetReadDeadline(time.Now().Add(c.getTimeoutForRequest(c.readTimeout())))
|
||||
r, err = co.ReadMsg()
|
||||
if err == nil && r.Id != m.Id {
|
||||
err = ErrId
|
||||
}
|
||||
return r, co.rtt, err
|
||||
}
|
||||
|
||||
// ReadMsg reads a message from the connection co.
|
||||
// If the received message contains a TSIG record the transaction signature
|
||||
// is verified. This method always tries to return the message, however if an
|
||||
// error is returned there are no guarantees that the returned message is a
|
||||
// valid representation of the packet read.
|
||||
func (co *Conn) ReadMsg() (*Msg, error) {
|
||||
p, err := co.ReadMsgHeader(nil)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
m := new(Msg)
|
||||
if err := m.Unpack(p); err != nil {
|
||||
// If an error was returned, we still want to allow the user to use
|
||||
// the message, but naively they can just check err if they don't want
|
||||
// to use an erroneous message
|
||||
return m, err
|
||||
}
|
||||
if t := m.IsTsig(); t != nil {
|
||||
if _, ok := co.TsigSecret[t.Hdr.Name]; !ok {
|
||||
return m, ErrSecret
|
||||
}
|
||||
// Need to work on the original message p, as that was used to calculate the tsig.
|
||||
err = TsigVerify(p, co.TsigSecret[t.Hdr.Name], co.tsigRequestMAC, false)
|
||||
}
|
||||
return m, err
|
||||
}
|
||||
|
||||
// ReadMsgHeader reads a DNS message, parses and populates hdr (when hdr is not nil).
|
||||
// Returns message as a byte slice to be parsed with Msg.Unpack later on.
|
||||
// Note that error handling on the message body is not possible as only the header is parsed.
|
||||
func (co *Conn) ReadMsgHeader(hdr *Header) ([]byte, error) {
|
||||
var (
|
||||
p []byte
|
||||
n int
|
||||
err error
|
||||
)
|
||||
|
||||
switch t := co.Conn.(type) {
|
||||
case *net.TCPConn, *tls.Conn:
|
||||
r := t.(io.Reader)
|
||||
|
||||
// First two bytes specify the length of the entire message.
|
||||
l, err := tcpMsgLen(r)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
p = make([]byte, l)
|
||||
n, err = tcpRead(r, p)
|
||||
co.rtt = time.Since(co.t)
|
||||
default:
|
||||
if co.UDPSize > MinMsgSize {
|
||||
p = make([]byte, co.UDPSize)
|
||||
} else {
|
||||
p = make([]byte, MinMsgSize)
|
||||
}
|
||||
n, err = co.Read(p)
|
||||
co.rtt = time.Since(co.t)
|
||||
}
|
||||
|
||||
if err != nil {
|
||||
return nil, err
|
||||
} else if n < headerSize {
|
||||
return nil, ErrShortRead
|
||||
}
|
||||
|
||||
p = p[:n]
|
||||
if hdr != nil {
|
||||
dh, _, err := unpackMsgHdr(p, 0)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
*hdr = dh
|
||||
}
|
||||
return p, err
|
||||
}
|
||||
|
||||
// tcpMsgLen is a helper func to read first two bytes of stream as uint16 packet length.
|
||||
func tcpMsgLen(t io.Reader) (int, error) {
|
||||
p := []byte{0, 0}
|
||||
n, err := t.Read(p)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
|
||||
// As seen with my local router/switch, the read above may return only 1 byte,
|
||||
// resulting in a ShortRead. Just write it out (instead of a loop) and read the
|
||||
// other byte.
|
||||
if n == 1 {
|
||||
n1, err := t.Read(p[1:])
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
n += n1
|
||||
}
|
||||
|
||||
if n != 2 {
|
||||
return 0, ErrShortRead
|
||||
}
|
||||
l := binary.BigEndian.Uint16(p)
|
||||
if l == 0 {
|
||||
return 0, ErrShortRead
|
||||
}
|
||||
return int(l), nil
|
||||
}
|
||||
|
||||
// tcpRead calls TCPConn.Read enough times to fill allocated buffer.
|
||||
func tcpRead(t io.Reader, p []byte) (int, error) {
|
||||
n, err := t.Read(p)
|
||||
if err != nil {
|
||||
return n, err
|
||||
}
|
||||
for n < len(p) {
|
||||
j, err := t.Read(p[n:])
|
||||
if err != nil {
|
||||
return n, err
|
||||
}
|
||||
n += j
|
||||
}
|
||||
return n, err
|
||||
}
|
||||
|
||||
// Read implements the net.Conn read method.
|
||||
func (co *Conn) Read(p []byte) (n int, err error) {
|
||||
if co.Conn == nil {
|
||||
return 0, ErrConnEmpty
|
||||
}
|
||||
if len(p) < 2 {
|
||||
return 0, io.ErrShortBuffer
|
||||
}
|
||||
switch t := co.Conn.(type) {
|
||||
case *net.TCPConn, *tls.Conn:
|
||||
r := t.(io.Reader)
|
||||
|
||||
l, err := tcpMsgLen(r)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
if l > len(p) {
|
||||
return int(l), io.ErrShortBuffer
|
||||
}
|
||||
return tcpRead(r, p[:l])
|
||||
}
|
||||
// UDP connection
|
||||
n, err = co.Conn.Read(p)
|
||||
if err != nil {
|
||||
return n, err
|
||||
}
|
||||
return n, err
|
||||
}
|
||||
|
||||
// WriteMsg sends a message through the connection co.
|
||||
// If the message m contains a TSIG record the transaction
|
||||
// signature is calculated.
|
||||
func (co *Conn) WriteMsg(m *Msg) (err error) {
|
||||
var out []byte
|
||||
if t := m.IsTsig(); t != nil {
|
||||
mac := ""
|
||||
if _, ok := co.TsigSecret[t.Hdr.Name]; !ok {
|
||||
return ErrSecret
|
||||
}
|
||||
out, mac, err = TsigGenerate(m, co.TsigSecret[t.Hdr.Name], co.tsigRequestMAC, false)
|
||||
// Set for the next read, although only used in zone transfers
|
||||
co.tsigRequestMAC = mac
|
||||
} else {
|
||||
out, err = m.Pack()
|
||||
}
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
co.t = time.Now()
|
||||
if _, err = co.Write(out); err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Write implements the net.Conn Write method.
|
||||
func (co *Conn) Write(p []byte) (n int, err error) {
|
||||
switch t := co.Conn.(type) {
|
||||
case *net.TCPConn, *tls.Conn:
|
||||
w := t.(io.Writer)
|
||||
|
||||
lp := len(p)
|
||||
if lp < 2 {
|
||||
return 0, io.ErrShortBuffer
|
||||
}
|
||||
if lp > MaxMsgSize {
|
||||
return 0, &Error{err: "message too large"}
|
||||
}
|
||||
l := make([]byte, 2, lp+2)
|
||||
binary.BigEndian.PutUint16(l, uint16(lp))
|
||||
p = append(l, p...)
|
||||
n, err := io.Copy(w, bytes.NewReader(p))
|
||||
return int(n), err
|
||||
}
|
||||
n, err = co.Conn.Write(p)
|
||||
return n, err
|
||||
}
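// Usage sketch (editorial addition, not part of the vendored source): DNS over
// TCP/TLS frames every message with a two-byte big-endian length, which is what
// Write above prepends and tcpMsgLen strips off. frameTCP below is a
// hypothetical stand-alone helper illustrating that framing.
package main

import (
	"encoding/binary"
	"fmt"
)

// frameTCP prepends the two-byte length prefix used for DNS over TCP.
func frameTCP(msg []byte) []byte {
	out := make([]byte, 2, len(msg)+2)
	binary.BigEndian.PutUint16(out, uint16(len(msg)))
	return append(out, msg...)
}

func main() {
	framed := frameTCP([]byte{0xab, 0xcd}) // a fake 2-byte payload
	fmt.Printf("% x\n", framed)            // prints: 00 02 ab cd
}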
|
||||
|
||||
// Return the appropriate timeout for a specific request
|
||||
func (c *Client) getTimeoutForRequest(timeout time.Duration) time.Duration {
|
||||
var requestTimeout time.Duration
|
||||
if c.Timeout != 0 {
|
||||
requestTimeout = c.Timeout
|
||||
} else {
|
||||
requestTimeout = timeout
|
||||
}
|
||||
// net.Dialer.Timeout has priority if smaller than the timeouts computed so
|
||||
// far
|
||||
if c.Dialer != nil && c.Dialer.Timeout != 0 {
|
||||
if c.Dialer.Timeout < requestTimeout {
|
||||
requestTimeout = c.Dialer.Timeout
|
||||
}
|
||||
}
|
||||
return requestTimeout
|
||||
}
|
||||
|
||||
// Dial connects to the address on the named network.
|
||||
func Dial(network, address string) (conn *Conn, err error) {
|
||||
conn = new(Conn)
|
||||
conn.Conn, err = net.Dial(network, address)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return conn, nil
|
||||
}
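// Usage sketch (editorial addition, not part of the vendored source): the
// low-level round trip that Client.Exchange wraps, using Dial, WriteMsg and
// ReadMsg directly. The resolver address 192.0.2.53:53 is a placeholder.
package main

import (
	"fmt"
	"log"

	"github.com/miekg/dns"
)

func main() {
	co, err := dns.Dial("udp", "192.0.2.53:53")
	if err != nil {
		log.Fatal(err)
	}
	defer co.Close()

	m := new(dns.Msg)
	m.SetQuestion(dns.Fqdn("example.org"), dns.TypeA)

	if err := co.WriteMsg(m); err != nil {
		log.Fatal(err)
	}
	in, err := co.ReadMsg()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(in.Answer)
}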
|
||||
|
||||
// ExchangeContext performs a synchronous UDP query, like Exchange. It
|
||||
// additionally obeys deadlines from the passed Context.
|
||||
func ExchangeContext(ctx context.Context, m *Msg, a string) (r *Msg, err error) {
|
||||
client := Client{Net: "udp"}
|
||||
r, _, err = client.ExchangeContext(ctx, m, a)
|
||||
// ignoring rtt to leave the original ExchangeContext API unchanged, but
// this function will go away
|
||||
return r, err
|
||||
}
|
||||
|
||||
// ExchangeConn performs a synchronous query. It sends the message m via the connection
|
||||
// c and waits for a reply. The connection c is not closed by ExchangeConn.
|
||||
// This function is going away, but can easily be mimicked:
|
||||
//
|
||||
// co := &dns.Conn{Conn: c} // c is your net.Conn
|
||||
// co.WriteMsg(m)
|
||||
// in, _ := co.ReadMsg()
|
||||
// co.Close()
|
||||
//
|
||||
func ExchangeConn(c net.Conn, m *Msg) (r *Msg, err error) {
|
||||
println("dns: ExchangeConn: this function is deprecated")
|
||||
co := new(Conn)
|
||||
co.Conn = c
|
||||
if err = co.WriteMsg(m); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
r, err = co.ReadMsg()
|
||||
if err == nil && r.Id != m.Id {
|
||||
err = ErrId
|
||||
}
|
||||
return r, err
|
||||
}
|
||||
|
||||
// DialTimeout acts like Dial but takes a timeout.
|
||||
func DialTimeout(network, address string, timeout time.Duration) (conn *Conn, err error) {
|
||||
client := Client{Net: network, Dialer: &net.Dialer{Timeout: timeout}}
|
||||
conn, err = client.Dial(address)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return conn, nil
|
||||
}
|
||||
|
||||
// DialWithTLS connects to the address on the named network with TLS.
|
||||
func DialWithTLS(network, address string, tlsConfig *tls.Config) (conn *Conn, err error) {
|
||||
if !strings.HasSuffix(network, "-tls") {
|
||||
network += "-tls"
|
||||
}
|
||||
client := Client{Net: network, TLSConfig: tlsConfig}
|
||||
conn, err = client.Dial(address)
|
||||
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return conn, nil
|
||||
}
|
||||
|
||||
// DialTimeoutWithTLS acts like DialWithTLS but takes a timeout.
|
||||
func DialTimeoutWithTLS(network, address string, tlsConfig *tls.Config, timeout time.Duration) (conn *Conn, err error) {
|
||||
if !strings.HasSuffix(network, "-tls") {
|
||||
network += "-tls"
|
||||
}
|
||||
client := Client{Net: network, Dialer: &net.Dialer{Timeout: timeout}, TLSConfig: tlsConfig}
|
||||
conn, err = client.Dial(address)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return conn, nil
|
||||
}
|
||||
|
||||
// ExchangeContext acts like Exchange, but honors the deadline on the provided
|
||||
// context, if present. If there is both a context deadline and a configured
|
||||
// timeout on the client, the earliest of the two takes effect.
|
||||
func (c *Client) ExchangeContext(ctx context.Context, m *Msg, a string) (r *Msg, rtt time.Duration, err error) {
|
||||
var timeout time.Duration
|
||||
if deadline, ok := ctx.Deadline(); !ok {
|
||||
timeout = 0
|
||||
} else {
|
||||
timeout = deadline.Sub(time.Now())
|
||||
}
|
||||
// not passing the context to the underlying calls, as the API does not support
|
||||
// context. For timeouts you should set up Client.Dialer and call Client.Exchange.
|
||||
c.Dialer = &net.Dialer{Timeout: timeout}
|
||||
return c.Exchange(m, a)
|
||||
}
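// Usage sketch (editorial addition, not part of the vendored source): issuing a
// query whose timeout is derived from a context deadline, as described above.
// The resolver address is a placeholder.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/miekg/dns"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	c := &dns.Client{Net: "udp"}
	m := new(dns.Msg)
	m.SetQuestion(dns.Fqdn("example.org"), dns.TypeAAAA)

	r, rtt, err := c.ExchangeContext(ctx, m, "192.0.2.53:53")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(rtt, r.Rcode)
}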
|
||||
139 vendor/github.com/miekg/dns/clientconfig.go generated vendored Normal file
@@ -0,0 +1,139 @@
|
||||
package dns
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"io"
|
||||
"os"
|
||||
"strconv"
|
||||
"strings"
|
||||
)
|
||||
|
||||
// ClientConfig wraps the contents of the /etc/resolv.conf file.
|
||||
type ClientConfig struct {
|
||||
Servers []string // servers to use
|
||||
Search []string // suffixes to append to local name
|
||||
Port string // what port to use
|
||||
Ndots int // number of dots in name to trigger absolute lookup
|
||||
Timeout int // seconds before giving up on packet
|
||||
Attempts int // lost packets before giving up on server, not used in the package dns
|
||||
}
|
||||
|
||||
// ClientConfigFromFile parses a resolv.conf(5) like file and returns
|
||||
// a *ClientConfig.
|
||||
func ClientConfigFromFile(resolvconf string) (*ClientConfig, error) {
|
||||
file, err := os.Open(resolvconf)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer file.Close()
|
||||
return ClientConfigFromReader(file)
|
||||
}
|
||||
|
||||
// ClientConfigFromReader works like ClientConfigFromFile but takes an io.Reader as argument
|
||||
func ClientConfigFromReader(resolvconf io.Reader) (*ClientConfig, error) {
|
||||
c := new(ClientConfig)
|
||||
scanner := bufio.NewScanner(resolvconf)
|
||||
c.Servers = make([]string, 0)
|
||||
c.Search = make([]string, 0)
|
||||
c.Port = "53"
|
||||
c.Ndots = 1
|
||||
c.Timeout = 5
|
||||
c.Attempts = 2
|
||||
|
||||
for scanner.Scan() {
|
||||
if err := scanner.Err(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
line := scanner.Text()
|
||||
f := strings.Fields(line)
|
||||
if len(f) < 1 {
|
||||
continue
|
||||
}
|
||||
switch f[0] {
|
||||
case "nameserver": // add one name server
|
||||
if len(f) > 1 {
|
||||
// One more check: make sure server name is
|
||||
// just an IP address. Otherwise we need DNS
|
||||
// to look it up.
|
||||
name := f[1]
|
||||
c.Servers = append(c.Servers, name)
|
||||
}
|
||||
|
||||
case "domain": // set search path to just this domain
|
||||
if len(f) > 1 {
|
||||
c.Search = make([]string, 1)
|
||||
c.Search[0] = f[1]
|
||||
} else {
|
||||
c.Search = make([]string, 0)
|
||||
}
|
||||
|
||||
case "search": // set search path to given servers
|
||||
c.Search = make([]string, len(f)-1)
|
||||
for i := 0; i < len(c.Search); i++ {
|
||||
c.Search[i] = f[i+1]
|
||||
}
|
||||
|
||||
case "options": // magic options
|
||||
for i := 1; i < len(f); i++ {
|
||||
s := f[i]
|
||||
switch {
|
||||
case len(s) >= 6 && s[:6] == "ndots:":
|
||||
n, _ := strconv.Atoi(s[6:])
|
||||
if n < 0 {
|
||||
n = 0
|
||||
} else if n > 15 {
|
||||
n = 15
|
||||
}
|
||||
c.Ndots = n
|
||||
case len(s) >= 8 && s[:8] == "timeout:":
|
||||
n, _ := strconv.Atoi(s[8:])
|
||||
if n < 1 {
|
||||
n = 1
|
||||
}
|
||||
c.Timeout = n
|
||||
case len(s) >= 8 && s[:9] == "attempts:":
|
||||
n, _ := strconv.Atoi(s[9:])
|
||||
if n < 1 {
|
||||
n = 1
|
||||
}
|
||||
c.Attempts = n
|
||||
case s == "rotate":
|
||||
/* not imp */
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
return c, nil
|
||||
}
|
||||
|
||||
// NameList returns all of the names that should be queried based on the
|
||||
// config. It is based on Go's net/dns name building, but it does not
|
||||
// check the length of the resulting names.
|
||||
func (c *ClientConfig) NameList(name string) []string {
|
||||
// if this domain is already fully qualified, no append needed.
|
||||
if IsFqdn(name) {
|
||||
return []string{name}
|
||||
}
|
||||
|
||||
// Check to see if the name has more labels than Ndots. Do this before making
|
||||
// the domain fully qualified.
|
||||
hasNdots := CountLabel(name) > c.Ndots
|
||||
// Make the domain fully qualified.
|
||||
name = Fqdn(name)
|
||||
|
||||
// Make a list of names based off search.
|
||||
names := []string{}
|
||||
|
||||
// If name has enough dots, try that first.
|
||||
if hasNdots {
|
||||
names = append(names, name)
|
||||
}
|
||||
for _, s := range c.Search {
|
||||
names = append(names, Fqdn(name+s))
|
||||
}
|
||||
// If we didn't have enough dots, try after suffixes.
|
||||
if !hasNdots {
|
||||
names = append(names, name)
|
||||
}
|
||||
return names
|
||||
}
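// Usage sketch (editorial addition, not part of the vendored source): load the
// system resolver configuration and expand a relative name through the search
// list with NameList. Assumes a readable /etc/resolv.conf.
package main

import (
	"fmt"
	"log"

	"github.com/miekg/dns"
)

func main() {
	conf, err := dns.ClientConfigFromFile("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("servers:", conf.Servers, "port:", conf.Port)
	// Candidate query names for a relative name, in the order a stub
	// resolver would try them (honoring ndots and the search list).
	fmt.Println(conf.NameList("www"))
}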
|
||||
188 vendor/github.com/miekg/dns/compress_generate.go generated vendored Normal file
@@ -0,0 +1,188 @@
|
||||
//+build ignore
|
||||
|
||||
// compress_generate.go is meant to run with go generate. It will use
// go/{importer,types} to track down all the RR struct types. Then for each type
// it will look to see if there are (compressible) names; if so it will add that
// type to compressionLenHelperType and compressionLenSearchType, which "fake" the
// compression so that Len() is fast.
|
||||
package main
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"go/format"
|
||||
"go/importer"
|
||||
"go/types"
|
||||
"log"
|
||||
"os"
|
||||
)
|
||||
|
||||
var packageHdr = `
|
||||
// Code generated by "go run compress_generate.go"; DO NOT EDIT.
|
||||
|
||||
package dns
|
||||
|
||||
`
|
||||
|
||||
// getTypeStruct will take a type and the package scope, and return the
|
||||
// (innermost) struct if the type is considered a RR type (currently defined as
|
||||
// those structs beginning with a RR_Header, could be redefined as implementing
|
||||
// the RR interface). The bool return value indicates if embedded structs were
|
||||
// resolved.
|
||||
func getTypeStruct(t types.Type, scope *types.Scope) (*types.Struct, bool) {
|
||||
st, ok := t.Underlying().(*types.Struct)
|
||||
if !ok {
|
||||
return nil, false
|
||||
}
|
||||
if st.Field(0).Type() == scope.Lookup("RR_Header").Type() {
|
||||
return st, false
|
||||
}
|
||||
if st.Field(0).Anonymous() {
|
||||
st, _ := getTypeStruct(st.Field(0).Type(), scope)
|
||||
return st, true
|
||||
}
|
||||
return nil, false
|
||||
}
|
||||
|
||||
func main() {
|
||||
// Import and type-check the package
|
||||
pkg, err := importer.Default().Import("github.com/miekg/dns")
|
||||
fatalIfErr(err)
|
||||
scope := pkg.Scope()
|
||||
|
||||
var domainTypes []string // Types that have a domain name in them (either compressible or not).
|
||||
var cdomainTypes []string // Types that have a compressible domain name in them (subset of domainType)
|
||||
Names:
|
||||
for _, name := range scope.Names() {
|
||||
o := scope.Lookup(name)
|
||||
if o == nil || !o.Exported() {
|
||||
continue
|
||||
}
|
||||
st, _ := getTypeStruct(o.Type(), scope)
|
||||
if st == nil {
|
||||
continue
|
||||
}
|
||||
if name == "PrivateRR" {
|
||||
continue
|
||||
}
|
||||
|
||||
if scope.Lookup("Type"+o.Name()) == nil && o.Name() != "RFC3597" {
|
||||
log.Fatalf("Constant Type%s does not exist.", o.Name())
|
||||
}
|
||||
|
||||
for i := 1; i < st.NumFields(); i++ {
|
||||
if _, ok := st.Field(i).Type().(*types.Slice); ok {
|
||||
if st.Tag(i) == `dns:"domain-name"` {
|
||||
domainTypes = append(domainTypes, o.Name())
|
||||
continue Names
|
||||
}
|
||||
if st.Tag(i) == `dns:"cdomain-name"` {
|
||||
cdomainTypes = append(cdomainTypes, o.Name())
|
||||
domainTypes = append(domainTypes, o.Name())
|
||||
continue Names
|
||||
}
|
||||
continue
|
||||
}
|
||||
|
||||
switch {
|
||||
case st.Tag(i) == `dns:"domain-name"`:
|
||||
domainTypes = append(domainTypes, o.Name())
|
||||
continue Names
|
||||
case st.Tag(i) == `dns:"cdomain-name"`:
|
||||
cdomainTypes = append(cdomainTypes, o.Name())
|
||||
domainTypes = append(domainTypes, o.Name())
|
||||
continue Names
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
b := &bytes.Buffer{}
|
||||
b.WriteString(packageHdr)
|
||||
|
||||
// compressionLenHelperType - all types that have domain-name/cdomain-name can be used for compressing names
|
||||
|
||||
fmt.Fprint(b, "func compressionLenHelperType(c map[string]int, r RR) {\n")
|
||||
fmt.Fprint(b, "switch x := r.(type) {\n")
|
||||
for _, name := range domainTypes {
|
||||
o := scope.Lookup(name)
|
||||
st, _ := getTypeStruct(o.Type(), scope)
|
||||
|
||||
fmt.Fprintf(b, "case *%s:\n", name)
|
||||
for i := 1; i < st.NumFields(); i++ {
|
||||
out := func(s string) { fmt.Fprintf(b, "compressionLenHelper(c, x.%s)\n", st.Field(i).Name()) }
|
||||
|
||||
if _, ok := st.Field(i).Type().(*types.Slice); ok {
|
||||
switch st.Tag(i) {
|
||||
case `dns:"domain-name"`:
|
||||
fallthrough
|
||||
case `dns:"cdomain-name"`:
|
||||
// For types like HIP we need to iterate over the elements in this slice.
|
||||
fmt.Fprintf(b, `for i := range x.%s {
|
||||
compressionLenHelper(c, x.%s[i])
|
||||
}
|
||||
`, st.Field(i).Name(), st.Field(i).Name())
|
||||
}
|
||||
continue
|
||||
}
|
||||
|
||||
switch {
|
||||
case st.Tag(i) == `dns:"cdomain-name"`:
|
||||
fallthrough
|
||||
case st.Tag(i) == `dns:"domain-name"`:
|
||||
out(st.Field(i).Name())
|
||||
}
|
||||
}
|
||||
}
|
||||
fmt.Fprintln(b, "}\n}\n\n")
|
||||
|
||||
// compressionLenSearchType - search cdomain-tags types for compressible names.
|
||||
|
||||
fmt.Fprint(b, "func compressionLenSearchType(c map[string]int, r RR) (int, bool) {\n")
|
||||
fmt.Fprint(b, "switch x := r.(type) {\n")
|
||||
for _, name := range cdomainTypes {
|
||||
o := scope.Lookup(name)
|
||||
st, _ := getTypeStruct(o.Type(), scope)
|
||||
|
||||
fmt.Fprintf(b, "case *%s:\n", name)
|
||||
j := 1
|
||||
for i := 1; i < st.NumFields(); i++ {
|
||||
out := func(s string, j int) {
|
||||
fmt.Fprintf(b, "k%d, ok%d := compressionLenSearch(c, x.%s)\n", j, j, st.Field(i).Name())
|
||||
}
|
||||
|
||||
// There are no slice types with names that can be compressed.
|
||||
|
||||
switch {
|
||||
case st.Tag(i) == `dns:"cdomain-name"`:
|
||||
out(st.Field(i).Name(), j)
|
||||
j++
|
||||
}
|
||||
}
|
||||
k := "k1"
|
||||
ok := "ok1"
|
||||
for i := 2; i < j; i++ {
|
||||
k += fmt.Sprintf(" + k%d", i)
|
||||
ok += fmt.Sprintf(" && ok%d", i)
|
||||
}
|
||||
fmt.Fprintf(b, "return %s, %s\n", k, ok)
|
||||
}
|
||||
fmt.Fprintln(b, "}\nreturn 0, false\n}\n\n")
|
||||
|
||||
// gofmt
|
||||
res, err := format.Source(b.Bytes())
|
||||
if err != nil {
|
||||
b.WriteTo(os.Stderr)
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
f, err := os.Create("zcompress.go")
|
||||
fatalIfErr(err)
|
||||
defer f.Close()
|
||||
f.Write(res)
|
||||
}
|
||||
|
||||
func fatalIfErr(err error) {
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
}
|
||||
43 vendor/github.com/miekg/dns/dane.go generated vendored Normal file
@@ -0,0 +1,43 @@
|
||||
package dns
|
||||
|
||||
import (
|
||||
"crypto/sha256"
|
||||
"crypto/sha512"
|
||||
"crypto/x509"
|
||||
"encoding/hex"
|
||||
"errors"
|
||||
)
|
||||
|
||||
// CertificateToDANE converts a certificate to a hex string as used in the TLSA or SMIMEA records.
|
||||
func CertificateToDANE(selector, matchingType uint8, cert *x509.Certificate) (string, error) {
|
||||
switch matchingType {
|
||||
case 0:
|
||||
switch selector {
|
||||
case 0:
|
||||
return hex.EncodeToString(cert.Raw), nil
|
||||
case 1:
|
||||
return hex.EncodeToString(cert.RawSubjectPublicKeyInfo), nil
|
||||
}
|
||||
case 1:
|
||||
h := sha256.New()
|
||||
switch selector {
|
||||
case 0:
|
||||
h.Write(cert.Raw)
|
||||
return hex.EncodeToString(h.Sum(nil)), nil
|
||||
case 1:
|
||||
h.Write(cert.RawSubjectPublicKeyInfo)
|
||||
return hex.EncodeToString(h.Sum(nil)), nil
|
||||
}
|
||||
case 2:
|
||||
h := sha512.New()
|
||||
switch selector {
|
||||
case 0:
|
||||
h.Write(cert.Raw)
|
||||
return hex.EncodeToString(h.Sum(nil)), nil
|
||||
case 1:
|
||||
h.Write(cert.RawSubjectPublicKeyInfo)
|
||||
return hex.EncodeToString(h.Sum(nil)), nil
|
||||
}
|
||||
}
|
||||
return "", errors.New("dns: bad MatchingType or Selector")
|
||||
}
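// Usage sketch (editorial addition, not part of the vendored source): derive
// the digest field of a TLSA record from a live peer certificate, using
// selector 1 (SubjectPublicKeyInfo) and matching type 1 (SHA-256). The target
// host is a placeholder.
package main

import (
	"crypto/tls"
	"fmt"
	"log"

	"github.com/miekg/dns"
)

func main() {
	conn, err := tls.Dial("tcp", "example.org:443", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	digest, err := dns.CertificateToDANE(1, 1, cert)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(digest)
}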
|
||||
288 vendor/github.com/miekg/dns/defaults.go generated vendored Normal file
@@ -0,0 +1,288 @@
|
||||
package dns
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"net"
|
||||
"strconv"
|
||||
)
|
||||
|
||||
const hexDigit = "0123456789abcdef"
|
||||
|
||||
// Everything is assumed in ClassINET.
|
||||
|
||||
// SetReply creates a reply message from a request message.
|
||||
func (dns *Msg) SetReply(request *Msg) *Msg {
|
||||
dns.Id = request.Id
|
||||
dns.Response = true
|
||||
dns.Opcode = request.Opcode
|
||||
if dns.Opcode == OpcodeQuery {
|
||||
dns.RecursionDesired = request.RecursionDesired // Copy rd bit
|
||||
dns.CheckingDisabled = request.CheckingDisabled // Copy cd bit
|
||||
}
|
||||
dns.Rcode = RcodeSuccess
|
||||
if len(request.Question) > 0 {
|
||||
dns.Question = make([]Question, 1)
|
||||
dns.Question[0] = request.Question[0]
|
||||
}
|
||||
return dns
|
||||
}
|
||||
|
||||
// SetQuestion creates a question message, it sets the Question
|
||||
// section, generates an Id and sets the RecursionDesired (RD)
|
||||
// bit to true.
|
||||
func (dns *Msg) SetQuestion(z string, t uint16) *Msg {
|
||||
dns.Id = Id()
|
||||
dns.RecursionDesired = true
|
||||
dns.Question = make([]Question, 1)
|
||||
dns.Question[0] = Question{z, t, ClassINET}
|
||||
return dns
|
||||
}
|
||||
|
||||
// SetNotify creates a notify message, it sets the Question
|
||||
// section, generates an Id and sets the Authoritative (AA)
|
||||
// bit to true.
|
||||
func (dns *Msg) SetNotify(z string) *Msg {
|
||||
dns.Opcode = OpcodeNotify
|
||||
dns.Authoritative = true
|
||||
dns.Id = Id()
|
||||
dns.Question = make([]Question, 1)
|
||||
dns.Question[0] = Question{z, TypeSOA, ClassINET}
|
||||
return dns
|
||||
}
|
||||
|
||||
// SetRcode creates an error message suitable for the request.
|
||||
func (dns *Msg) SetRcode(request *Msg, rcode int) *Msg {
|
||||
dns.SetReply(request)
|
||||
dns.Rcode = rcode
|
||||
return dns
|
||||
}
|
||||
|
||||
// SetRcodeFormatError creates a message with FormError set.
|
||||
func (dns *Msg) SetRcodeFormatError(request *Msg) *Msg {
|
||||
dns.Rcode = RcodeFormatError
|
||||
dns.Opcode = OpcodeQuery
|
||||
dns.Response = true
|
||||
dns.Authoritative = false
|
||||
dns.Id = request.Id
|
||||
return dns
|
||||
}
|
||||
|
||||
// SetUpdate makes the message a dynamic update message. It
|
||||
// sets the ZONE section to: z, TypeSOA, ClassINET.
|
||||
func (dns *Msg) SetUpdate(z string) *Msg {
|
||||
dns.Id = Id()
|
||||
dns.Response = false
|
||||
dns.Opcode = OpcodeUpdate
|
||||
dns.Compress = false // BIND9 cannot handle compression
|
||||
dns.Question = make([]Question, 1)
|
||||
dns.Question[0] = Question{z, TypeSOA, ClassINET}
|
||||
return dns
|
||||
}
|
||||
|
||||
// SetIxfr creates message for requesting an IXFR.
|
||||
func (dns *Msg) SetIxfr(z string, serial uint32, ns, mbox string) *Msg {
|
||||
dns.Id = Id()
|
||||
dns.Question = make([]Question, 1)
|
||||
dns.Ns = make([]RR, 1)
|
||||
s := new(SOA)
|
||||
s.Hdr = RR_Header{z, TypeSOA, ClassINET, defaultTtl, 0}
|
||||
s.Serial = serial
|
||||
s.Ns = ns
|
||||
s.Mbox = mbox
|
||||
dns.Question[0] = Question{z, TypeIXFR, ClassINET}
|
||||
dns.Ns[0] = s
|
||||
return dns
|
||||
}
|
||||
|
||||
// SetAxfr creates message for requesting an AXFR.
|
||||
func (dns *Msg) SetAxfr(z string) *Msg {
|
||||
dns.Id = Id()
|
||||
dns.Question = make([]Question, 1)
|
||||
dns.Question[0] = Question{z, TypeAXFR, ClassINET}
|
||||
return dns
|
||||
}
|
||||
|
||||
// SetTsig appends a TSIG RR to the message.
|
||||
// This is only a skeleton TSIG RR that is added as the last RR in the
|
||||
// additional section. The TSIG is calculated when the message is being sent.
|
||||
func (dns *Msg) SetTsig(z, algo string, fudge uint16, timesigned int64) *Msg {
|
||||
t := new(TSIG)
|
||||
t.Hdr = RR_Header{z, TypeTSIG, ClassANY, 0, 0}
|
||||
t.Algorithm = algo
|
||||
t.Fudge = fudge
|
||||
t.TimeSigned = uint64(timesigned)
|
||||
t.OrigId = dns.Id
|
||||
dns.Extra = append(dns.Extra, t)
|
||||
return dns
|
||||
}
|
||||
|
||||
// SetEdns0 appends an EDNS0 OPT RR to the message.
// TSIG should always be the last RR in a message.
|
||||
func (dns *Msg) SetEdns0(udpsize uint16, do bool) *Msg {
|
||||
e := new(OPT)
|
||||
e.Hdr.Name = "."
|
||||
e.Hdr.Rrtype = TypeOPT
|
||||
e.SetUDPSize(udpsize)
|
||||
if do {
|
||||
e.SetDo()
|
||||
}
|
||||
dns.Extra = append(dns.Extra, e)
|
||||
return dns
|
||||
}
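// Usage sketch (editorial addition, not part of the vendored source): build a
// DNSSEC-aware query with SetQuestion and SetEdns0 (4096-byte UDP buffer, DO
// bit set) and send it with the package-level Exchange helper defined earlier
// in client.go. The resolver address is a placeholder.
package main

import (
	"fmt"
	"log"

	"github.com/miekg/dns"
)

func main() {
	m := new(dns.Msg)
	m.SetQuestion(dns.Fqdn("example.org"), dns.TypeDNSKEY)
	m.SetEdns0(4096, true) // DO=1 asks the server to include DNSSEC records

	r, err := dns.Exchange(m, "192.0.2.53:53")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(r.Rcode, len(r.Answer))
}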
|
||||
|
||||
// IsTsig checks if the message has a TSIG record as the last record
|
||||
// in the additional section. It returns the TSIG record found or nil.
|
||||
func (dns *Msg) IsTsig() *TSIG {
|
||||
if len(dns.Extra) > 0 {
|
||||
if dns.Extra[len(dns.Extra)-1].Header().Rrtype == TypeTSIG {
|
||||
return dns.Extra[len(dns.Extra)-1].(*TSIG)
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// IsEdns0 checks if the message has a EDNS0 (OPT) record, any EDNS0
|
||||
// record in the additional section will do. It returns the OPT record
|
||||
// found or nil.
|
||||
func (dns *Msg) IsEdns0() *OPT {
|
||||
// EDNS0 is at the end of the additional section, start there.
|
||||
// We might want to change this to *only* look at the last two
|
||||
// records. So we see TSIG and/or OPT - this is a slightly bigger
|
||||
// change though.
|
||||
for i := len(dns.Extra) - 1; i >= 0; i-- {
|
||||
if dns.Extra[i].Header().Rrtype == TypeOPT {
|
||||
return dns.Extra[i].(*OPT)
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// IsDomainName checks if s is a valid domain name; it returns the number of
// labels and true when the domain name is valid. Note that a non-fully-qualified
// domain name is considered valid; in that case the last label is counted in
// the number of labels. When false is returned the number of labels is not
// defined. Also note that this function is extremely liberal; almost any
// string is a valid domain name as the DNS is an 8-bit protocol. It checks that each
// label fits in 63 characters, but there is no length check for the entire
// string s, i.e. a domain name longer than 255 characters is considered valid.
|
||||
func IsDomainName(s string) (labels int, ok bool) {
|
||||
_, labels, err := packDomainName(s, nil, 0, nil, false)
|
||||
return labels, err == nil
|
||||
}
|
||||
|
||||
// IsSubDomain checks if child is indeed a child of the parent. If child and parent
|
||||
// are the same domain true is returned as well.
|
||||
func IsSubDomain(parent, child string) bool {
|
||||
// Entire child is contained in parent
|
||||
return CompareDomainName(parent, child) == CountLabel(parent)
|
||||
}
|
||||
|
||||
// IsMsg sanity checks buf and returns an error if it isn't a valid DNS packet.
|
||||
// The checking is performed on the binary payload.
|
||||
func IsMsg(buf []byte) error {
|
||||
// Header
|
||||
if len(buf) < 12 {
|
||||
return errors.New("dns: bad message header")
|
||||
}
|
||||
// Header: Opcode
|
||||
// TODO(miek): more checks here, e.g. check all header bits.
|
||||
return nil
|
||||
}
|
||||
|
||||
// IsFqdn checks if a domain name is fully qualified.
|
||||
func IsFqdn(s string) bool {
|
||||
l := len(s)
|
||||
if l == 0 {
|
||||
return false
|
||||
}
|
||||
return s[l-1] == '.'
|
||||
}
|
||||
|
||||
// IsRRset checks if a set of RRs is a valid RRset as defined by RFC 2181.
|
||||
// This means the RRs need to have the same type, name, and class. Returns true
|
||||
// if the RR set is valid, otherwise false.
|
||||
func IsRRset(rrset []RR) bool {
|
||||
if len(rrset) == 0 {
|
||||
return false
|
||||
}
|
||||
if len(rrset) == 1 {
|
||||
return true
|
||||
}
|
||||
rrHeader := rrset[0].Header()
|
||||
rrType := rrHeader.Rrtype
|
||||
rrClass := rrHeader.Class
|
||||
rrName := rrHeader.Name
|
||||
|
||||
for _, rr := range rrset[1:] {
|
||||
curRRHeader := rr.Header()
|
||||
if curRRHeader.Rrtype != rrType || curRRHeader.Class != rrClass || curRRHeader.Name != rrName {
|
||||
// Mismatch between the records, so this is not a valid rrset for
|
||||
// signing/verifying.
|
||||
return false
|
||||
}
|
||||
}
|
||||
|
||||
return true
|
||||
}
|
||||
|
||||
// Fqdn returns the fully qualified domain name from s.
|
||||
// If s is already fully qualified, it behaves as the identity function.
|
||||
func Fqdn(s string) string {
|
||||
if IsFqdn(s) {
|
||||
return s
|
||||
}
|
||||
return s + "."
|
||||
}
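// Usage sketch (editorial addition, not part of the vendored source): the name
// helpers above in action.
package main

import (
	"fmt"

	"github.com/miekg/dns"
)

func main() {
	fmt.Println(dns.Fqdn("example.org"))    // "example.org."
	fmt.Println(dns.IsFqdn("example.org.")) // true

	labels, ok := dns.IsDomainName("www.example.org")
	fmt.Println(labels, ok) // label count and validity

	fmt.Println(dns.IsSubDomain("example.org.", "www.example.org.")) // true
}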
|
||||
|
||||
// Copied from the official Go code.
|
||||
|
||||
// ReverseAddr returns the in-addr.arpa. or ip6.arpa. hostname of the IP
|
||||
// address suitable for reverse DNS (PTR) record lookups or an error if it fails
|
||||
// to parse the IP address.
|
||||
func ReverseAddr(addr string) (arpa string, err error) {
|
||||
ip := net.ParseIP(addr)
|
||||
if ip == nil {
|
||||
return "", &Error{err: "unrecognized address: " + addr}
|
||||
}
|
||||
if ip.To4() != nil {
|
||||
return strconv.Itoa(int(ip[15])) + "." + strconv.Itoa(int(ip[14])) + "." + strconv.Itoa(int(ip[13])) + "." +
|
||||
strconv.Itoa(int(ip[12])) + ".in-addr.arpa.", nil
|
||||
}
|
||||
// Must be IPv6
|
||||
buf := make([]byte, 0, len(ip)*4+len("ip6.arpa."))
|
||||
// Add it, in reverse, to the buffer
|
||||
for i := len(ip) - 1; i >= 0; i-- {
|
||||
v := ip[i]
|
||||
buf = append(buf, hexDigit[v&0xF])
|
||||
buf = append(buf, '.')
|
||||
buf = append(buf, hexDigit[v>>4])
|
||||
buf = append(buf, '.')
|
||||
}
|
||||
// Append "ip6.arpa." and return (buf already has the final .)
|
||||
buf = append(buf, "ip6.arpa."...)
|
||||
return string(buf), nil
|
||||
}
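// Usage sketch (editorial addition, not part of the vendored source): build the
// PTR owner names for an IPv4 and an IPv6 address.
package main

import (
	"fmt"
	"log"

	"github.com/miekg/dns"
)

func main() {
	v4, err := dns.ReverseAddr("192.0.2.1")
	if err != nil {
		log.Fatal(err)
	}
	v6, err := dns.ReverseAddr("2001:db8::1")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(v4) // 1.2.0.192.in-addr.arpa.
	fmt.Println(v6) // nibble-reversed name ending in ip6.arpa.
}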
|
||||
|
||||
// String returns the string representation for the type t.
|
||||
func (t Type) String() string {
|
||||
if t1, ok := TypeToString[uint16(t)]; ok {
|
||||
return t1
|
||||
}
|
||||
return "TYPE" + strconv.Itoa(int(t))
|
||||
}
|
||||
|
||||
// String returns the string representation for the class c.
|
||||
func (c Class) String() string {
|
||||
if s, ok := ClassToString[uint16(c)]; ok {
|
||||
// Only emit mnemonics when they are unambiguous; specifically, ANY is in both.
|
||||
if _, ok := StringToType[s]; !ok {
|
||||
return s
|
||||
}
|
||||
}
|
||||
return "CLASS" + strconv.Itoa(int(c))
|
||||
}
|
||||
|
||||
// String returns the string representation for the name n.
|
||||
func (n Name) String() string {
|
||||
return sprintName(string(n))
|
||||
}
|
||||
107 vendor/github.com/miekg/dns/dns.go generated vendored Normal file
@@ -0,0 +1,107 @@
|
||||
package dns
|
||||
|
||||
import "strconv"
|
||||
|
||||
const (
|
||||
year68 = 1 << 31 // For RFC1982 (Serial Arithmetic) calculations in 32 bits.
|
||||
defaultTtl = 3600 // Default internal TTL.
|
||||
|
||||
// DefaultMsgSize is the standard default for messages larger than 512 bytes.
|
||||
DefaultMsgSize = 4096
|
||||
// MinMsgSize is the minimal size of a DNS packet.
|
||||
MinMsgSize = 512
|
||||
// MaxMsgSize is the largest possible DNS packet.
|
||||
MaxMsgSize = 65535
|
||||
)
|
||||
|
||||
// Error represents a DNS error.
|
||||
type Error struct{ err string }
|
||||
|
||||
func (e *Error) Error() string {
|
||||
if e == nil {
|
||||
return "dns: <nil>"
|
||||
}
|
||||
return "dns: " + e.err
|
||||
}
|
||||
|
||||
// An RR represents a resource record.
|
||||
type RR interface {
|
||||
// Header returns the header of a resource record. The header contains
|
||||
// everything up to the rdata.
|
||||
Header() *RR_Header
|
||||
// String returns the text representation of the resource record.
|
||||
String() string
|
||||
|
||||
// copy returns a copy of the RR
|
||||
copy() RR
|
||||
// len returns the length (in octets) of the uncompressed RR in wire format.
|
||||
len() int
|
||||
// pack packs an RR into wire format.
|
||||
pack([]byte, int, map[string]int, bool) (int, error)
|
||||
}
|
||||
|
||||
// RR_Header is the header all DNS resource records share.
|
||||
type RR_Header struct {
|
||||
Name string `dns:"cdomain-name"`
|
||||
Rrtype uint16
|
||||
Class uint16
|
||||
Ttl uint32
|
||||
Rdlength uint16 // Length of data after header.
|
||||
}
|
||||
|
||||
// Header returns itself. This is here to make RR_Header implement the RR interface.
|
||||
func (h *RR_Header) Header() *RR_Header { return h }
|
||||
|
||||
// Just to implement the RR interface.
|
||||
func (h *RR_Header) copy() RR { return nil }
|
||||
|
||||
func (h *RR_Header) copyHeader() *RR_Header {
|
||||
r := new(RR_Header)
|
||||
r.Name = h.Name
|
||||
r.Rrtype = h.Rrtype
|
||||
r.Class = h.Class
|
||||
r.Ttl = h.Ttl
|
||||
r.Rdlength = h.Rdlength
|
||||
return r
|
||||
}
|
||||
|
||||
func (h *RR_Header) String() string {
|
||||
var s string
|
||||
|
||||
if h.Rrtype == TypeOPT {
|
||||
s = ";"
|
||||
// and maybe other things
|
||||
}
|
||||
|
||||
s += sprintName(h.Name) + "\t"
|
||||
s += strconv.FormatInt(int64(h.Ttl), 10) + "\t"
|
||||
s += Class(h.Class).String() + "\t"
|
||||
s += Type(h.Rrtype).String() + "\t"
|
||||
return s
|
||||
}
|
||||
|
||||
func (h *RR_Header) len() int {
|
||||
l := len(h.Name) + 1
|
||||
l += 10 // rrtype(2) + class(2) + ttl(4) + rdlength(2)
|
||||
return l
|
||||
}
|
||||
|
||||
// ToRFC3597 converts a known RR to the unknown RR representation from RFC 3597.
|
||||
func (rr *RFC3597) ToRFC3597(r RR) error {
|
||||
buf := make([]byte, r.len()*2)
|
||||
off, err := PackRR(r, buf, 0, nil, false)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
buf = buf[:off]
|
||||
if int(r.Header().Rdlength) > off {
|
||||
return ErrBuf
|
||||
}
|
||||
|
||||
rfc3597, _, err := unpackRFC3597(*r.Header(), buf, off-int(r.Header().Rdlength))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
*rr = *rfc3597.(*RFC3597)
|
||||
return nil
|
||||
}
|
||||
784 vendor/github.com/miekg/dns/dnssec.go generated vendored Normal file
@@ -0,0 +1,784 @@
|
||||
package dns
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"crypto"
|
||||
"crypto/dsa"
|
||||
"crypto/ecdsa"
|
||||
"crypto/elliptic"
|
||||
_ "crypto/md5"
|
||||
"crypto/rand"
|
||||
"crypto/rsa"
|
||||
_ "crypto/sha1"
|
||||
_ "crypto/sha256"
|
||||
_ "crypto/sha512"
|
||||
"encoding/asn1"
|
||||
"encoding/binary"
|
||||
"encoding/hex"
|
||||
"math/big"
|
||||
"sort"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"golang.org/x/crypto/ed25519"
|
||||
)
|
||||
|
||||
// DNSSEC encryption algorithm codes.
|
||||
const (
|
||||
_ uint8 = iota
|
||||
RSAMD5
|
||||
DH
|
||||
DSA
|
||||
_ // Skip 4, RFC 6725, section 2.1
|
||||
RSASHA1
|
||||
DSANSEC3SHA1
|
||||
RSASHA1NSEC3SHA1
|
||||
RSASHA256
|
||||
_ // Skip 9, RFC 6725, section 2.1
|
||||
RSASHA512
|
||||
_ // Skip 11, RFC 6725, section 2.1
|
||||
ECCGOST
|
||||
ECDSAP256SHA256
|
||||
ECDSAP384SHA384
|
||||
ED25519
|
||||
ED448
|
||||
INDIRECT uint8 = 252
|
||||
PRIVATEDNS uint8 = 253 // Private (experimental keys)
|
||||
PRIVATEOID uint8 = 254
|
||||
)
|
||||
|
||||
// AlgorithmToString is a map of algorithm IDs to algorithm names.
|
||||
var AlgorithmToString = map[uint8]string{
|
||||
RSAMD5: "RSAMD5",
|
||||
DH: "DH",
|
||||
DSA: "DSA",
|
||||
RSASHA1: "RSASHA1",
|
||||
DSANSEC3SHA1: "DSA-NSEC3-SHA1",
|
||||
RSASHA1NSEC3SHA1: "RSASHA1-NSEC3-SHA1",
|
||||
RSASHA256: "RSASHA256",
|
||||
RSASHA512: "RSASHA512",
|
||||
ECCGOST: "ECC-GOST",
|
||||
ECDSAP256SHA256: "ECDSAP256SHA256",
|
||||
ECDSAP384SHA384: "ECDSAP384SHA384",
|
||||
ED25519: "ED25519",
|
||||
ED448: "ED448",
|
||||
INDIRECT: "INDIRECT",
|
||||
PRIVATEDNS: "PRIVATEDNS",
|
||||
PRIVATEOID: "PRIVATEOID",
|
||||
}
|
||||
|
||||
// StringToAlgorithm is the reverse of AlgorithmToString.
|
||||
var StringToAlgorithm = reverseInt8(AlgorithmToString)
|
||||
|
||||
// AlgorithmToHash is a map of algorithm crypto hash IDs to crypto.Hash's.
|
||||
var AlgorithmToHash = map[uint8]crypto.Hash{
|
||||
RSAMD5: crypto.MD5, // Deprecated in RFC 6725
|
||||
RSASHA1: crypto.SHA1,
|
||||
RSASHA1NSEC3SHA1: crypto.SHA1,
|
||||
RSASHA256: crypto.SHA256,
|
||||
ECDSAP256SHA256: crypto.SHA256,
|
||||
ECDSAP384SHA384: crypto.SHA384,
|
||||
RSASHA512: crypto.SHA512,
|
||||
ED25519: crypto.Hash(0),
|
||||
}
|
||||
|
||||
// DNSSEC hashing algorithm codes.
|
||||
const (
|
||||
_ uint8 = iota
|
||||
SHA1 // RFC 4034
|
||||
SHA256 // RFC 4509
|
||||
GOST94 // RFC 5933
|
||||
SHA384 // Experimental
|
||||
SHA512 // Experimental
|
||||
)
|
||||
|
||||
// HashToString is a map of hash IDs to names.
|
||||
var HashToString = map[uint8]string{
|
||||
SHA1: "SHA1",
|
||||
SHA256: "SHA256",
|
||||
GOST94: "GOST94",
|
||||
SHA384: "SHA384",
|
||||
SHA512: "SHA512",
|
||||
}
|
||||
|
||||
// StringToHash is a map of names to hash IDs.
|
||||
var StringToHash = reverseInt8(HashToString)
|
||||
|
||||
// DNSKEY flag values.
|
||||
const (
|
||||
SEP = 1
|
||||
REVOKE = 1 << 7
|
||||
ZONE = 1 << 8
|
||||
)
|
||||
|
||||
// The RRSIG needs to be converted to wireformat with some of the rdata (the signature) missing.
|
||||
type rrsigWireFmt struct {
|
||||
TypeCovered uint16
|
||||
Algorithm uint8
|
||||
Labels uint8
|
||||
OrigTtl uint32
|
||||
Expiration uint32
|
||||
Inception uint32
|
||||
KeyTag uint16
|
||||
SignerName string `dns:"domain-name"`
|
||||
/* No Signature */
|
||||
}
|
||||
|
||||
// Used for converting DNSKEY's rdata to wirefmt.
|
||||
type dnskeyWireFmt struct {
|
||||
Flags uint16
|
||||
Protocol uint8
|
||||
Algorithm uint8
|
||||
PublicKey string `dns:"base64"`
|
||||
/* Nothing is left out */
|
||||
}
|
||||
|
||||
func divRoundUp(a, b int) int {
|
||||
return (a + b - 1) / b
|
||||
}
|
||||
|
||||
// KeyTag calculates the keytag (or key-id) of the DNSKEY.
|
||||
func (k *DNSKEY) KeyTag() uint16 {
|
||||
if k == nil {
|
||||
return 0
|
||||
}
|
||||
var keytag int
|
||||
switch k.Algorithm {
|
||||
case RSAMD5:
|
||||
// Look at the bottom two bytes of the modulus, which is the last
|
||||
// item in the pubkey. We could do this faster by looking directly
|
||||
// at the base64 values. But I'm lazy.
|
||||
modulus, _ := fromBase64([]byte(k.PublicKey))
|
||||
if len(modulus) > 1 {
|
||||
x := binary.BigEndian.Uint16(modulus[len(modulus)-2:])
|
||||
keytag = int(x)
|
||||
}
|
||||
default:
|
||||
keywire := new(dnskeyWireFmt)
|
||||
keywire.Flags = k.Flags
|
||||
keywire.Protocol = k.Protocol
|
||||
keywire.Algorithm = k.Algorithm
|
||||
keywire.PublicKey = k.PublicKey
|
||||
wire := make([]byte, DefaultMsgSize)
|
||||
n, err := packKeyWire(keywire, wire)
|
||||
if err != nil {
|
||||
return 0
|
||||
}
|
||||
wire = wire[:n]
|
||||
for i, v := range wire {
|
||||
if i&1 != 0 {
|
||||
keytag += int(v) // must be larger than uint32
|
||||
} else {
|
||||
keytag += int(v) << 8
|
||||
}
|
||||
}
|
||||
keytag += (keytag >> 16) & 0xFFFF
|
||||
keytag &= 0xFFFF
|
||||
}
|
||||
return uint16(keytag)
|
||||
}
|
||||
|
||||
// ToDS converts a DNSKEY record to a DS record.
|
||||
func (k *DNSKEY) ToDS(h uint8) *DS {
|
||||
if k == nil {
|
||||
return nil
|
||||
}
|
||||
ds := new(DS)
|
||||
ds.Hdr.Name = k.Hdr.Name
|
||||
ds.Hdr.Class = k.Hdr.Class
|
||||
ds.Hdr.Rrtype = TypeDS
|
||||
ds.Hdr.Ttl = k.Hdr.Ttl
|
||||
ds.Algorithm = k.Algorithm
|
||||
ds.DigestType = h
|
||||
ds.KeyTag = k.KeyTag()
|
||||
|
||||
keywire := new(dnskeyWireFmt)
|
||||
keywire.Flags = k.Flags
|
||||
keywire.Protocol = k.Protocol
|
||||
keywire.Algorithm = k.Algorithm
|
||||
keywire.PublicKey = k.PublicKey
|
||||
wire := make([]byte, DefaultMsgSize)
|
||||
n, err := packKeyWire(keywire, wire)
|
||||
if err != nil {
|
||||
return nil
|
||||
}
|
||||
wire = wire[:n]
|
||||
|
||||
owner := make([]byte, 255)
|
||||
off, err1 := PackDomainName(strings.ToLower(k.Hdr.Name), owner, 0, nil, false)
|
||||
if err1 != nil {
|
||||
return nil
|
||||
}
|
||||
owner = owner[:off]
|
||||
// RFC4034:
|
||||
// digest = digest_algorithm( DNSKEY owner name | DNSKEY RDATA);
|
||||
// "|" denotes concatenation
|
||||
// DNSKEY RDATA = Flags | Protocol | Algorithm | Public Key.
|
||||
|
||||
var hash crypto.Hash
|
||||
switch h {
|
||||
case SHA1:
|
||||
hash = crypto.SHA1
|
||||
case SHA256:
|
||||
hash = crypto.SHA256
|
||||
case SHA384:
|
||||
hash = crypto.SHA384
|
||||
case SHA512:
|
||||
hash = crypto.SHA512
|
||||
default:
|
||||
return nil
|
||||
}
|
||||
|
||||
s := hash.New()
|
||||
s.Write(owner)
|
||||
s.Write(wire)
|
||||
ds.Digest = hex.EncodeToString(s.Sum(nil))
|
||||
return ds
|
||||
}
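// Usage sketch (editorial addition, not part of the vendored source): create a
// DNSKEY (Generate is defined in dnssec_keygen.go further below), then derive
// the SHA-256 DS record that would be published in the parent zone.
package main

import (
	"fmt"
	"log"

	"github.com/miekg/dns"
)

func main() {
	key := &dns.DNSKEY{
		Hdr:       dns.RR_Header{Name: "example.org.", Rrtype: dns.TypeDNSKEY, Class: dns.ClassINET, Ttl: 3600},
		Flags:     dns.ZONE | dns.SEP, // 257: a key-signing key
		Protocol:  3,
		Algorithm: dns.ECDSAP256SHA256,
	}
	if _, err := key.Generate(256); err != nil {
		log.Fatal(err)
	}
	ds := key.ToDS(dns.SHA256)
	fmt.Println(key.KeyTag(), ds.Digest)
}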
|
||||
|
||||
// ToCDNSKEY converts a DNSKEY record to a CDNSKEY record.
|
||||
func (k *DNSKEY) ToCDNSKEY() *CDNSKEY {
|
||||
c := &CDNSKEY{DNSKEY: *k}
|
||||
c.Hdr = *k.Hdr.copyHeader()
|
||||
c.Hdr.Rrtype = TypeCDNSKEY
|
||||
return c
|
||||
}
|
||||
|
||||
// ToCDS converts a DS record to a CDS record.
|
||||
func (d *DS) ToCDS() *CDS {
|
||||
c := &CDS{DS: *d}
|
||||
c.Hdr = *d.Hdr.copyHeader()
|
||||
c.Hdr.Rrtype = TypeCDS
|
||||
return c
|
||||
}
|
||||
|
||||
// Sign signs an RRSet. The signature needs to be filled in with the values:
|
||||
// Inception, Expiration, KeyTag, SignerName and Algorithm. The rest is copied
|
||||
// from the RRset. Sign returns a non-nil error when the signing fails.
|
||||
// There is no check if RRSet is a proper (RFC 2181) RRSet. If OrigTTL is non
|
||||
// zero, it is used as-is, otherwise the TTL of the RRset is used as the
|
||||
// OrigTTL.
|
||||
func (rr *RRSIG) Sign(k crypto.Signer, rrset []RR) error {
|
||||
if k == nil {
|
||||
return ErrPrivKey
|
||||
}
|
||||
// s.Inception and s.Expiration may be 0 (rollover etc.), the rest must be set
|
||||
if rr.KeyTag == 0 || len(rr.SignerName) == 0 || rr.Algorithm == 0 {
|
||||
return ErrKey
|
||||
}
|
||||
|
||||
rr.Hdr.Rrtype = TypeRRSIG
|
||||
rr.Hdr.Name = rrset[0].Header().Name
|
||||
rr.Hdr.Class = rrset[0].Header().Class
|
||||
if rr.OrigTtl == 0 { // If set don't override
|
||||
rr.OrigTtl = rrset[0].Header().Ttl
|
||||
}
|
||||
rr.TypeCovered = rrset[0].Header().Rrtype
|
||||
rr.Labels = uint8(CountLabel(rrset[0].Header().Name))
|
||||
|
||||
if strings.HasPrefix(rrset[0].Header().Name, "*") {
|
||||
rr.Labels-- // wildcard, remove from label count
|
||||
}
|
||||
|
||||
sigwire := new(rrsigWireFmt)
|
||||
sigwire.TypeCovered = rr.TypeCovered
|
||||
sigwire.Algorithm = rr.Algorithm
|
||||
sigwire.Labels = rr.Labels
|
||||
sigwire.OrigTtl = rr.OrigTtl
|
||||
sigwire.Expiration = rr.Expiration
|
||||
sigwire.Inception = rr.Inception
|
||||
sigwire.KeyTag = rr.KeyTag
|
||||
// For signing, lowercase this name
|
||||
sigwire.SignerName = strings.ToLower(rr.SignerName)
|
||||
|
||||
// Create the desired binary blob
|
||||
signdata := make([]byte, DefaultMsgSize)
|
||||
n, err := packSigWire(sigwire, signdata)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
signdata = signdata[:n]
|
||||
wire, err := rawSignatureData(rrset, rr)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
hash, ok := AlgorithmToHash[rr.Algorithm]
|
||||
if !ok {
|
||||
return ErrAlg
|
||||
}
|
||||
|
||||
switch rr.Algorithm {
|
||||
case ED25519:
|
||||
// ed25519 signs the raw message and performs hashing internally.
|
||||
// All other supported signature schemes operate over the pre-hashed
|
||||
// message, and thus ed25519 must be handled separately here.
|
||||
//
|
||||
// The raw message is passed directly into sign and crypto.Hash(0) is
|
||||
// used to signal to the crypto.Signer that the data has not been hashed.
|
||||
signature, err := sign(k, append(signdata, wire...), crypto.Hash(0), rr.Algorithm)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
rr.Signature = toBase64(signature)
|
||||
default:
|
||||
h := hash.New()
|
||||
h.Write(signdata)
|
||||
h.Write(wire)
|
||||
|
||||
signature, err := sign(k, h.Sum(nil), hash, rr.Algorithm)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
rr.Signature = toBase64(signature)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
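// Usage sketch (editorial addition, not part of the vendored source): sign a
// one-record RRset with a freshly generated ECDSA P-256 key. In practice the
// private key would be loaded from a key file rather than generated on the fly.
package main

import (
	"crypto/ecdsa"
	"fmt"
	"log"
	"net"
	"time"

	"github.com/miekg/dns"
)

func main() {
	key := &dns.DNSKEY{
		Hdr:       dns.RR_Header{Name: "example.org.", Rrtype: dns.TypeDNSKEY, Class: dns.ClassINET, Ttl: 3600},
		Flags:     dns.ZONE,
		Protocol:  3,
		Algorithm: dns.ECDSAP256SHA256,
	}
	priv, err := key.Generate(256)
	if err != nil {
		log.Fatal(err)
	}

	rrset := []dns.RR{&dns.A{
		Hdr: dns.RR_Header{Name: "www.example.org.", Rrtype: dns.TypeA, Class: dns.ClassINET, Ttl: 3600},
		A:   net.ParseIP("192.0.2.1"),
	}}

	sig := &dns.RRSIG{
		KeyTag:     key.KeyTag(),
		SignerName: key.Hdr.Name,
		Algorithm:  key.Algorithm,
		Inception:  uint32(time.Now().Add(-time.Hour).Unix()),
		Expiration: uint32(time.Now().Add(24 * time.Hour).Unix()),
	}
	if err := sig.Sign(priv.(*ecdsa.PrivateKey), rrset); err != nil {
		log.Fatal(err)
	}
	fmt.Println(sig) // the completed RRSIG, ready to Verify against key
}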
|
||||
|
||||
func sign(k crypto.Signer, hashed []byte, hash crypto.Hash, alg uint8) ([]byte, error) {
|
||||
signature, err := k.Sign(rand.Reader, hashed, hash)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
switch alg {
|
||||
case RSASHA1, RSASHA1NSEC3SHA1, RSASHA256, RSASHA512:
|
||||
return signature, nil
|
||||
|
||||
case ECDSAP256SHA256, ECDSAP384SHA384:
|
||||
ecdsaSignature := &struct {
|
||||
R, S *big.Int
|
||||
}{}
|
||||
if _, err := asn1.Unmarshal(signature, ecdsaSignature); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
var intlen int
|
||||
switch alg {
|
||||
case ECDSAP256SHA256:
|
||||
intlen = 32
|
||||
case ECDSAP384SHA384:
|
||||
intlen = 48
|
||||
}
|
||||
|
||||
signature := intToBytes(ecdsaSignature.R, intlen)
|
||||
signature = append(signature, intToBytes(ecdsaSignature.S, intlen)...)
|
||||
return signature, nil
|
||||
|
||||
// There is no defined interface for what a DSA backed crypto.Signer returns
|
||||
case DSA, DSANSEC3SHA1:
|
||||
// t := divRoundUp(divRoundUp(p.PublicKey.Y.BitLen(), 8)-64, 8)
|
||||
// signature := []byte{byte(t)}
|
||||
// signature = append(signature, intToBytes(r1, 20)...)
|
||||
// signature = append(signature, intToBytes(s1, 20)...)
|
||||
// rr.Signature = signature
|
||||
|
||||
case ED25519:
|
||||
return signature, nil
|
||||
}
|
||||
|
||||
return nil, ErrAlg
|
||||
}
|
||||
|
||||
// Verify validates an RRSet with the signature and key. This is only the
|
||||
// cryptographic test, the signature validity period must be checked separately.
|
||||
// This function copies the rdata of some RRs (to lowercase domain names) for the validation to work.
|
||||
func (rr *RRSIG) Verify(k *DNSKEY, rrset []RR) error {
|
||||
// First the easy checks
|
||||
if !IsRRset(rrset) {
|
||||
return ErrRRset
|
||||
}
|
||||
if rr.KeyTag != k.KeyTag() {
|
||||
return ErrKey
|
||||
}
|
||||
if rr.Hdr.Class != k.Hdr.Class {
|
||||
return ErrKey
|
||||
}
|
||||
if rr.Algorithm != k.Algorithm {
|
||||
return ErrKey
|
||||
}
|
||||
if strings.ToLower(rr.SignerName) != strings.ToLower(k.Hdr.Name) {
|
||||
return ErrKey
|
||||
}
|
||||
if k.Protocol != 3 {
|
||||
return ErrKey
|
||||
}
|
||||
|
||||
// IsRRset checked that we have at least one RR and that the RRs in
|
||||
// the set have consistent type, class, and name. Also check that type and
|
||||
// class matches the RRSIG record.
|
||||
if rrset[0].Header().Class != rr.Hdr.Class {
|
||||
return ErrRRset
|
||||
}
|
||||
if rrset[0].Header().Rrtype != rr.TypeCovered {
|
||||
return ErrRRset
|
||||
}
|
||||
|
||||
// RFC 4035 5.3.2. Reconstructing the Signed Data
|
||||
// Copy the sig, except the rrsig data
|
||||
sigwire := new(rrsigWireFmt)
|
||||
sigwire.TypeCovered = rr.TypeCovered
|
||||
sigwire.Algorithm = rr.Algorithm
|
||||
sigwire.Labels = rr.Labels
|
||||
sigwire.OrigTtl = rr.OrigTtl
|
||||
sigwire.Expiration = rr.Expiration
|
||||
sigwire.Inception = rr.Inception
|
||||
sigwire.KeyTag = rr.KeyTag
|
||||
sigwire.SignerName = strings.ToLower(rr.SignerName)
|
||||
// Create the desired binary blob
|
||||
signeddata := make([]byte, DefaultMsgSize)
|
||||
n, err := packSigWire(sigwire, signeddata)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
signeddata = signeddata[:n]
|
||||
wire, err := rawSignatureData(rrset, rr)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
sigbuf := rr.sigBuf() // Get the binary signature data
|
||||
if rr.Algorithm == PRIVATEDNS { // PRIVATEOID
|
||||
// TODO(miek)
|
||||
// remove the domain name and assume its ours?
|
||||
}
|
||||
|
||||
hash, ok := AlgorithmToHash[rr.Algorithm]
|
||||
if !ok {
|
||||
return ErrAlg
|
||||
}
|
||||
|
||||
switch rr.Algorithm {
|
||||
case RSASHA1, RSASHA1NSEC3SHA1, RSASHA256, RSASHA512, RSAMD5:
|
||||
// TODO(mg): this can be done quicker, ie. cache the pubkey data somewhere??
|
||||
pubkey := k.publicKeyRSA() // Get the key
|
||||
if pubkey == nil {
|
||||
return ErrKey
|
||||
}
|
||||
|
||||
h := hash.New()
|
||||
h.Write(signeddata)
|
||||
h.Write(wire)
|
||||
return rsa.VerifyPKCS1v15(pubkey, hash, h.Sum(nil), sigbuf)
|
||||
|
||||
case ECDSAP256SHA256, ECDSAP384SHA384:
|
||||
pubkey := k.publicKeyECDSA()
|
||||
if pubkey == nil {
|
||||
return ErrKey
|
||||
}
|
||||
|
||||
// Split sigbuf into the r and s coordinates
|
||||
r := new(big.Int).SetBytes(sigbuf[:len(sigbuf)/2])
|
||||
s := new(big.Int).SetBytes(sigbuf[len(sigbuf)/2:])
|
||||
|
||||
h := hash.New()
|
||||
h.Write(signeddata)
|
||||
h.Write(wire)
|
||||
if ecdsa.Verify(pubkey, h.Sum(nil), r, s) {
|
||||
return nil
|
||||
}
|
||||
return ErrSig
|
||||
|
||||
case ED25519:
|
||||
pubkey := k.publicKeyED25519()
|
||||
if pubkey == nil {
|
||||
return ErrKey
|
||||
}
|
||||
|
||||
if ed25519.Verify(pubkey, append(signeddata, wire...), sigbuf) {
|
||||
return nil
|
||||
}
|
||||
return ErrSig
|
||||
|
||||
default:
|
||||
return ErrAlg
|
||||
}
|
||||
}
|
||||
|
||||
// ValidityPeriod uses RFC1982 serial arithmetic to calculate
|
||||
// if a signature period is valid. If t is the zero time, the
|
||||
// current time is taken, otherwise t is. Returns true if the signature
|
||||
// is valid at the given time, otherwise returns false.
|
||||
func (rr *RRSIG) ValidityPeriod(t time.Time) bool {
|
||||
var utc int64
|
||||
if t.IsZero() {
|
||||
utc = time.Now().UTC().Unix()
|
||||
} else {
|
||||
utc = t.UTC().Unix()
|
||||
}
|
||||
modi := (int64(rr.Inception) - utc) / year68
|
||||
mode := (int64(rr.Expiration) - utc) / year68
|
||||
ti := int64(rr.Inception) + (modi * year68)
|
||||
te := int64(rr.Expiration) + (mode * year68)
|
||||
return ti <= utc && utc <= te
|
||||
}
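// Usage sketch (editorial addition, not part of the vendored source): check
// only the temporal validity of a signature; the cryptographic check is
// Verify above. The timestamps here are synthetic.
package main

import (
	"fmt"
	"time"

	"github.com/miekg/dns"
)

func main() {
	sig := &dns.RRSIG{
		Inception:  uint32(time.Now().Add(-time.Hour).Unix()),
		Expiration: uint32(time.Now().Add(time.Hour).Unix()),
	}
	fmt.Println(sig.ValidityPeriod(time.Time{}))                    // true: "now" is inside the window
	fmt.Println(sig.ValidityPeriod(time.Now().Add(48 * time.Hour))) // false: past expiration
}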
|
||||
|
||||
// Return the signature's base64-encoded sigdata as a byte slice.
|
||||
func (rr *RRSIG) sigBuf() []byte {
|
||||
sigbuf, err := fromBase64([]byte(rr.Signature))
|
||||
if err != nil {
|
||||
return nil
|
||||
}
|
||||
return sigbuf
|
||||
}
|
||||
|
||||
// publicKeyRSA returns the RSA public key from a DNSKEY record.
|
||||
func (k *DNSKEY) publicKeyRSA() *rsa.PublicKey {
|
||||
keybuf, err := fromBase64([]byte(k.PublicKey))
|
||||
if err != nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
// RFC 2537/3110, section 2. RSA Public KEY Resource Records
|
||||
// Length is in the 0th byte, unless it's zero, then it
// is in bytes 1 and 2 and it's a 16-bit number.
|
||||
explen := uint16(keybuf[0])
|
||||
keyoff := 1
|
||||
if explen == 0 {
|
||||
explen = uint16(keybuf[1])<<8 | uint16(keybuf[2])
|
||||
keyoff = 3
|
||||
}
|
||||
pubkey := new(rsa.PublicKey)
|
||||
|
||||
pubkey.N = big.NewInt(0)
|
||||
shift := uint64((explen - 1) * 8)
|
||||
expo := uint64(0)
|
||||
for i := int(explen - 1); i > 0; i-- {
|
||||
expo += uint64(keybuf[keyoff+i]) << shift
|
||||
shift -= 8
|
||||
}
|
||||
// Remainder
|
||||
expo += uint64(keybuf[keyoff])
|
||||
if expo > (2<<31)+1 {
|
||||
// Larger expo than supported.
|
||||
// println("dns: F5 primes (or larger) are not supported")
|
||||
return nil
|
||||
}
|
||||
pubkey.E = int(expo)
|
||||
|
||||
pubkey.N.SetBytes(keybuf[keyoff+int(explen):])
|
||||
return pubkey
|
||||
}
|
||||
|
||||
// publicKeyECDSA returns the Curve public key from the DNSKEY record.
|
||||
func (k *DNSKEY) publicKeyECDSA() *ecdsa.PublicKey {
|
||||
keybuf, err := fromBase64([]byte(k.PublicKey))
|
||||
if err != nil {
|
||||
return nil
|
||||
}
|
||||
pubkey := new(ecdsa.PublicKey)
|
||||
switch k.Algorithm {
|
||||
case ECDSAP256SHA256:
|
||||
pubkey.Curve = elliptic.P256()
|
||||
if len(keybuf) != 64 {
|
||||
// wrongly encoded key
|
||||
return nil
|
||||
}
|
||||
case ECDSAP384SHA384:
|
||||
pubkey.Curve = elliptic.P384()
|
||||
if len(keybuf) != 96 {
|
||||
// Wrongly encoded key
|
||||
return nil
|
||||
}
|
||||
}
|
||||
pubkey.X = big.NewInt(0)
|
||||
pubkey.X.SetBytes(keybuf[:len(keybuf)/2])
|
||||
pubkey.Y = big.NewInt(0)
|
||||
pubkey.Y.SetBytes(keybuf[len(keybuf)/2:])
|
||||
return pubkey
|
||||
}
|
||||
|
||||
func (k *DNSKEY) publicKeyDSA() *dsa.PublicKey {
|
||||
keybuf, err := fromBase64([]byte(k.PublicKey))
|
||||
if err != nil {
|
||||
return nil
|
||||
}
|
||||
if len(keybuf) < 22 {
|
||||
return nil
|
||||
}
|
||||
t, keybuf := int(keybuf[0]), keybuf[1:]
|
||||
size := 64 + t*8
|
||||
q, keybuf := keybuf[:20], keybuf[20:]
|
||||
if len(keybuf) != 3*size {
|
||||
return nil
|
||||
}
|
||||
p, keybuf := keybuf[:size], keybuf[size:]
|
||||
g, y := keybuf[:size], keybuf[size:]
|
||||
pubkey := new(dsa.PublicKey)
|
||||
pubkey.Parameters.Q = big.NewInt(0).SetBytes(q)
|
||||
pubkey.Parameters.P = big.NewInt(0).SetBytes(p)
|
||||
pubkey.Parameters.G = big.NewInt(0).SetBytes(g)
|
||||
pubkey.Y = big.NewInt(0).SetBytes(y)
|
||||
return pubkey
|
||||
}
|
||||
|
||||
func (k *DNSKEY) publicKeyED25519() ed25519.PublicKey {
|
||||
keybuf, err := fromBase64([]byte(k.PublicKey))
|
||||
if err != nil {
|
||||
return nil
|
||||
}
|
||||
if len(keybuf) != ed25519.PublicKeySize {
|
||||
return nil
|
||||
}
|
||||
return keybuf
|
||||
}
|
||||
|
||||
type wireSlice [][]byte
|
||||
|
||||
func (p wireSlice) Len() int { return len(p) }
|
||||
func (p wireSlice) Swap(i, j int) { p[i], p[j] = p[j], p[i] }
|
||||
func (p wireSlice) Less(i, j int) bool {
|
||||
_, ioff, _ := UnpackDomainName(p[i], 0)
|
||||
_, joff, _ := UnpackDomainName(p[j], 0)
|
||||
return bytes.Compare(p[i][ioff+10:], p[j][joff+10:]) < 0
|
||||
}
|
||||
|
||||
// Return the raw signature data.
|
||||
func rawSignatureData(rrset []RR, s *RRSIG) (buf []byte, err error) {
|
||||
wires := make(wireSlice, len(rrset))
|
||||
for i, r := range rrset {
|
||||
r1 := r.copy()
|
||||
r1.Header().Ttl = s.OrigTtl
|
||||
labels := SplitDomainName(r1.Header().Name)
|
||||
// 6.2. Canonical RR Form. (4) - wildcards
|
||||
if len(labels) > int(s.Labels) {
|
||||
// Wildcard
|
||||
r1.Header().Name = "*." + strings.Join(labels[len(labels)-int(s.Labels):], ".") + "."
|
||||
}
|
||||
// RFC 4034: 6.2. Canonical RR Form. (2) - domain name to lowercase
|
||||
r1.Header().Name = strings.ToLower(r1.Header().Name)
|
||||
// 6.2. Canonical RR Form. (3) - domain rdata to lowercase.
|
||||
// NS, MD, MF, CNAME, SOA, MB, MG, MR, PTR,
|
||||
// HINFO, MINFO, MX, RP, AFSDB, RT, SIG, PX, NXT, NAPTR, KX,
|
||||
// SRV, DNAME, A6
|
||||
//
|
||||
// RFC 6840 - Clarifications and Implementation Notes for DNS Security (DNSSEC):
|
||||
// Section 6.2 of [RFC4034] also erroneously lists HINFO as a record
|
||||
// that needs conversion to lowercase, and twice at that. Since HINFO
|
||||
// records contain no domain names, they are not subject to case
|
||||
// conversion.
|
||||
switch x := r1.(type) {
|
||||
case *NS:
|
||||
x.Ns = strings.ToLower(x.Ns)
|
||||
case *MD:
|
||||
x.Md = strings.ToLower(x.Md)
|
||||
case *MF:
|
||||
x.Mf = strings.ToLower(x.Mf)
|
||||
case *CNAME:
|
||||
x.Target = strings.ToLower(x.Target)
|
||||
case *SOA:
|
||||
x.Ns = strings.ToLower(x.Ns)
|
||||
x.Mbox = strings.ToLower(x.Mbox)
|
||||
case *MB:
|
||||
x.Mb = strings.ToLower(x.Mb)
|
||||
case *MG:
|
||||
x.Mg = strings.ToLower(x.Mg)
|
||||
case *MR:
|
||||
x.Mr = strings.ToLower(x.Mr)
|
||||
case *PTR:
|
||||
x.Ptr = strings.ToLower(x.Ptr)
|
||||
case *MINFO:
|
||||
x.Rmail = strings.ToLower(x.Rmail)
|
||||
x.Email = strings.ToLower(x.Email)
|
||||
case *MX:
|
||||
x.Mx = strings.ToLower(x.Mx)
|
||||
case *RP:
|
||||
x.Mbox = strings.ToLower(x.Mbox)
|
||||
x.Txt = strings.ToLower(x.Txt)
|
||||
case *AFSDB:
|
||||
x.Hostname = strings.ToLower(x.Hostname)
|
||||
case *RT:
|
||||
x.Host = strings.ToLower(x.Host)
|
||||
case *SIG:
|
||||
x.SignerName = strings.ToLower(x.SignerName)
|
||||
case *PX:
|
||||
x.Map822 = strings.ToLower(x.Map822)
|
||||
x.Mapx400 = strings.ToLower(x.Mapx400)
|
||||
case *NAPTR:
|
||||
x.Replacement = strings.ToLower(x.Replacement)
|
||||
case *KX:
|
||||
x.Exchanger = strings.ToLower(x.Exchanger)
|
||||
case *SRV:
|
||||
x.Target = strings.ToLower(x.Target)
|
||||
case *DNAME:
|
||||
x.Target = strings.ToLower(x.Target)
|
||||
}
|
||||
// 6.2. Canonical RR Form. (5) - origTTL
|
||||
wire := make([]byte, r1.len()+1) // +1 to be safe(r)
|
||||
off, err1 := PackRR(r1, wire, 0, nil, false)
|
||||
if err1 != nil {
|
||||
return nil, err1
|
||||
}
|
||||
wire = wire[:off]
|
||||
wires[i] = wire
|
||||
}
|
||||
sort.Sort(wires)
|
||||
for i, wire := range wires {
|
||||
if i > 0 && bytes.Equal(wire, wires[i-1]) {
|
||||
continue
|
||||
}
|
||||
buf = append(buf, wire...)
|
||||
}
|
||||
return buf, nil
|
||||
}
|
||||
|
||||
func packSigWire(sw *rrsigWireFmt, msg []byte) (int, error) {
|
||||
// copied from zmsg.go RRSIG packing
|
||||
off, err := packUint16(sw.TypeCovered, msg, 0)
|
||||
if err != nil {
|
||||
return off, err
|
||||
}
|
||||
off, err = packUint8(sw.Algorithm, msg, off)
|
||||
if err != nil {
|
||||
return off, err
|
||||
}
|
||||
off, err = packUint8(sw.Labels, msg, off)
|
||||
if err != nil {
|
||||
return off, err
|
||||
}
|
||||
off, err = packUint32(sw.OrigTtl, msg, off)
|
||||
if err != nil {
|
||||
return off, err
|
||||
}
|
||||
off, err = packUint32(sw.Expiration, msg, off)
|
||||
if err != nil {
|
||||
return off, err
|
||||
}
|
||||
off, err = packUint32(sw.Inception, msg, off)
|
||||
if err != nil {
|
||||
return off, err
|
||||
}
|
||||
off, err = packUint16(sw.KeyTag, msg, off)
|
||||
if err != nil {
|
||||
return off, err
|
||||
}
|
||||
off, err = PackDomainName(sw.SignerName, msg, off, nil, false)
|
||||
if err != nil {
|
||||
return off, err
|
||||
}
|
||||
return off, nil
|
||||
}
|
||||
|
||||
func packKeyWire(dw *dnskeyWireFmt, msg []byte) (int, error) {
|
||||
// copied from zmsg.go DNSKEY packing
|
||||
off, err := packUint16(dw.Flags, msg, 0)
|
||||
if err != nil {
|
||||
return off, err
|
||||
}
|
||||
off, err = packUint8(dw.Protocol, msg, off)
|
||||
if err != nil {
|
||||
return off, err
|
||||
}
|
||||
off, err = packUint8(dw.Algorithm, msg, off)
|
||||
if err != nil {
|
||||
return off, err
|
||||
}
|
||||
off, err = packStringBase64(dw.PublicKey, msg, off)
|
||||
if err != nil {
|
||||
return off, err
|
||||
}
|
||||
return off, nil
|
||||
}
|
||||
178 vendor/github.com/miekg/dns/dnssec_keygen.go generated vendored Normal file
@@ -0,0 +1,178 @@
|
||||
package dns
|
||||
|
||||
import (
|
||||
"crypto"
|
||||
"crypto/dsa"
|
||||
"crypto/ecdsa"
|
||||
"crypto/elliptic"
|
||||
"crypto/rand"
|
||||
"crypto/rsa"
|
||||
"math/big"
|
||||
|
||||
"golang.org/x/crypto/ed25519"
|
||||
)
|
||||
|
||||
// Generate generates a DNSKEY of the given bit size.
|
||||
// The public part is put inside the DNSKEY record.
|
||||
// The Algorithm in the key must be set as this will define
|
||||
// what kind of DNSKEY will be generated.
|
||||
// The ECDSA algorithms imply a fixed keysize, in that case
|
||||
// bits should be set to the size of the algorithm.
|
||||
func (k *DNSKEY) Generate(bits int) (crypto.PrivateKey, error) {
|
||||
switch k.Algorithm {
|
||||
case DSA, DSANSEC3SHA1:
|
||||
if bits != 1024 {
|
||||
return nil, ErrKeySize
|
||||
}
|
||||
case RSAMD5, RSASHA1, RSASHA256, RSASHA1NSEC3SHA1:
|
||||
if bits < 512 || bits > 4096 {
|
||||
return nil, ErrKeySize
|
||||
}
|
||||
case RSASHA512:
|
||||
if bits < 1024 || bits > 4096 {
|
||||
return nil, ErrKeySize
|
||||
}
|
||||
case ECDSAP256SHA256:
|
||||
if bits != 256 {
|
||||
return nil, ErrKeySize
|
||||
}
|
||||
case ECDSAP384SHA384:
|
||||
if bits != 384 {
|
||||
return nil, ErrKeySize
|
||||
}
|
||||
case ED25519:
|
||||
if bits != 256 {
|
||||
return nil, ErrKeySize
|
||||
}
|
||||
}
|
||||
|
||||
switch k.Algorithm {
|
||||
case DSA, DSANSEC3SHA1:
|
||||
params := new(dsa.Parameters)
|
||||
if err := dsa.GenerateParameters(params, rand.Reader, dsa.L1024N160); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
priv := new(dsa.PrivateKey)
|
||||
priv.PublicKey.Parameters = *params
|
||||
err := dsa.GenerateKey(priv, rand.Reader)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
k.setPublicKeyDSA(params.Q, params.P, params.G, priv.PublicKey.Y)
|
||||
return priv, nil
|
||||
case RSAMD5, RSASHA1, RSASHA256, RSASHA512, RSASHA1NSEC3SHA1:
|
||||
priv, err := rsa.GenerateKey(rand.Reader, bits)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
k.setPublicKeyRSA(priv.PublicKey.E, priv.PublicKey.N)
|
||||
return priv, nil
|
||||
case ECDSAP256SHA256, ECDSAP384SHA384:
|
||||
var c elliptic.Curve
|
||||
switch k.Algorithm {
|
||||
case ECDSAP256SHA256:
|
||||
c = elliptic.P256()
|
||||
case ECDSAP384SHA384:
|
||||
c = elliptic.P384()
|
||||
}
|
||||
priv, err := ecdsa.GenerateKey(c, rand.Reader)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
k.setPublicKeyECDSA(priv.PublicKey.X, priv.PublicKey.Y)
|
||||
return priv, nil
|
||||
case ED25519:
|
||||
pub, priv, err := ed25519.GenerateKey(rand.Reader)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
k.setPublicKeyED25519(pub)
|
||||
return priv, nil
|
||||
default:
|
||||
return nil, ErrAlg
|
||||
}
|
||||
}
|
||||
|
||||
// Set the public key (the value E and N)
|
||||
func (k *DNSKEY) setPublicKeyRSA(_E int, _N *big.Int) bool {
|
||||
if _E == 0 || _N == nil {
|
||||
return false
|
||||
}
|
||||
buf := exponentToBuf(_E)
|
||||
buf = append(buf, _N.Bytes()...)
|
||||
k.PublicKey = toBase64(buf)
|
||||
return true
|
||||
}
|
||||
|
||||
// Set the public key for Elliptic Curves
|
||||
func (k *DNSKEY) setPublicKeyECDSA(_X, _Y *big.Int) bool {
|
||||
if _X == nil || _Y == nil {
|
||||
return false
|
||||
}
|
||||
var intlen int
|
||||
switch k.Algorithm {
|
||||
case ECDSAP256SHA256:
|
||||
intlen = 32
|
||||
case ECDSAP384SHA384:
|
||||
intlen = 48
|
||||
}
|
||||
k.PublicKey = toBase64(curveToBuf(_X, _Y, intlen))
|
||||
return true
|
||||
}
|
||||
|
||||
// Set the public key for DSA
|
||||
func (k *DNSKEY) setPublicKeyDSA(_Q, _P, _G, _Y *big.Int) bool {
|
||||
if _Q == nil || _P == nil || _G == nil || _Y == nil {
|
||||
return false
|
||||
}
|
||||
buf := dsaToBuf(_Q, _P, _G, _Y)
|
||||
k.PublicKey = toBase64(buf)
|
||||
return true
|
||||
}
|
||||
|
||||
// Set the public key for Ed25519
|
||||
func (k *DNSKEY) setPublicKeyED25519(_K ed25519.PublicKey) bool {
|
||||
if _K == nil {
|
||||
return false
|
||||
}
|
||||
k.PublicKey = toBase64(_K)
|
||||
return true
|
||||
}
|
||||
|
||||
// Set the public key (the values E and N) for RSA
|
||||
// RFC 3110: Section 2. RSA Public KEY Resource Records
|
||||
func exponentToBuf(_E int) []byte {
|
||||
var buf []byte
|
||||
i := big.NewInt(int64(_E)).Bytes()
|
||||
if len(i) < 256 {
|
||||
buf = make([]byte, 1, 1+len(i))
|
||||
buf[0] = uint8(len(i))
|
||||
} else {
|
||||
buf = make([]byte, 3, 3+len(i))
|
||||
buf[0] = 0
|
||||
buf[1] = uint8(len(i) >> 8)
|
||||
buf[2] = uint8(len(i))
|
||||
}
|
||||
buf = append(buf, i...)
|
||||
return buf
|
||||
}
|
||||
|
||||
// Set the public key for X and Y for Curve. The two
|
||||
// values are just concatenated.
|
||||
func curveToBuf(_X, _Y *big.Int, intlen int) []byte {
|
||||
buf := intToBytes(_X, intlen)
|
||||
buf = append(buf, intToBytes(_Y, intlen)...)
|
||||
return buf
|
||||
}
|
||||
|
||||
// Set the public key for X and Y for Curve. The two
|
||||
// values are just concatenated.
|
||||
func dsaToBuf(_Q, _P, _G, _Y *big.Int) []byte {
|
||||
t := divRoundUp(divRoundUp(_G.BitLen(), 8)-64, 8)
|
||||
buf := []byte{byte(t)}
|
||||
buf = append(buf, intToBytes(_Q, 20)...)
|
||||
buf = append(buf, intToBytes(_P, 64+t*8)...)
|
||||
buf = append(buf, intToBytes(_G, 64+t*8)...)
|
||||
buf = append(buf, intToBytes(_Y, 64+t*8)...)
|
||||
return buf
|
||||
}
|
||||
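As a sketch of how Generate is typically driven (the owner name and flags are illustrative; ToDS and KeyTag are other DNSKEY helpers in this package, not shown in this file):

package main

import (
	"fmt"

	"github.com/miekg/dns"
)

func main() {
	key := &dns.DNSKEY{
		Hdr:       dns.RR_Header{Name: "example.org.", Rrtype: dns.TypeDNSKEY, Class: dns.ClassINET, Ttl: 3600},
		Flags:     257, // key-signing key
		Protocol:  3,
		Algorithm: dns.ECDSAP256SHA256,
	}
	priv, err := key.Generate(256) // for ECDSAP256SHA256 the bit size must be 256, per the checks above
	if err != nil {
		panic(err)
	}
	fmt.Println(key)                  // DNSKEY RR; Generate filled in the public key
	fmt.Println(key.ToDS(dns.SHA256)) // matching DS record for the parent zone
	_ = priv                          // crypto.PrivateKey (an *ecdsa.PrivateKey here)
}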
297
vendor/github.com/miekg/dns/dnssec_keyscan.go
generated
vendored
Normal file
@ -0,0 +1,297 @@
|
||||
package dns
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"crypto"
|
||||
"crypto/dsa"
|
||||
"crypto/ecdsa"
|
||||
"crypto/rsa"
|
||||
"io"
|
||||
"math/big"
|
||||
"strconv"
|
||||
"strings"
|
||||
|
||||
"golang.org/x/crypto/ed25519"
|
||||
)
|
||||
|
||||
// NewPrivateKey returns a PrivateKey by parsing the string s.
|
||||
// s should be in the same form as the BIND private key files.
|
||||
func (k *DNSKEY) NewPrivateKey(s string) (crypto.PrivateKey, error) {
|
||||
if s == "" || s[len(s)-1] != '\n' { // We need a closing newline
|
||||
return k.ReadPrivateKey(strings.NewReader(s+"\n"), "")
|
||||
}
|
||||
return k.ReadPrivateKey(strings.NewReader(s), "")
|
||||
}
|
||||
|
||||
// ReadPrivateKey reads a private key from the io.Reader q. The string file is
|
||||
// only used in error reporting.
|
||||
// The public key must be known, because some cryptographic algorithms embed
|
||||
// the public key inside the private key.
|
||||
func (k *DNSKEY) ReadPrivateKey(q io.Reader, file string) (crypto.PrivateKey, error) {
|
||||
m, err := parseKey(q, file)
|
||||
if m == nil {
|
||||
return nil, err
|
||||
}
|
||||
if _, ok := m["private-key-format"]; !ok {
|
||||
return nil, ErrPrivKey
|
||||
}
|
||||
if m["private-key-format"] != "v1.2" && m["private-key-format"] != "v1.3" {
|
||||
return nil, ErrPrivKey
|
||||
}
|
||||
// TODO(mg): check if the pubkey matches the private key
|
||||
algo, err := strconv.ParseUint(strings.SplitN(m["algorithm"], " ", 2)[0], 10, 8)
|
||||
if err != nil {
|
||||
return nil, ErrPrivKey
|
||||
}
|
||||
switch uint8(algo) {
|
||||
case DSA:
|
||||
priv, err := readPrivateKeyDSA(m)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
pub := k.publicKeyDSA()
|
||||
if pub == nil {
|
||||
return nil, ErrKey
|
||||
}
|
||||
priv.PublicKey = *pub
|
||||
return priv, nil
|
||||
case RSAMD5:
|
||||
fallthrough
|
||||
case RSASHA1:
|
||||
fallthrough
|
||||
case RSASHA1NSEC3SHA1:
|
||||
fallthrough
|
||||
case RSASHA256:
|
||||
fallthrough
|
||||
case RSASHA512:
|
||||
priv, err := readPrivateKeyRSA(m)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
pub := k.publicKeyRSA()
|
||||
if pub == nil {
|
||||
return nil, ErrKey
|
||||
}
|
||||
priv.PublicKey = *pub
|
||||
return priv, nil
|
||||
case ECCGOST:
|
||||
return nil, ErrPrivKey
|
||||
case ECDSAP256SHA256:
|
||||
fallthrough
|
||||
case ECDSAP384SHA384:
|
||||
priv, err := readPrivateKeyECDSA(m)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
pub := k.publicKeyECDSA()
|
||||
if pub == nil {
|
||||
return nil, ErrKey
|
||||
}
|
||||
priv.PublicKey = *pub
|
||||
return priv, nil
|
||||
case ED25519:
|
||||
return readPrivateKeyED25519(m)
|
||||
default:
|
||||
return nil, ErrPrivKey
|
||||
}
|
||||
}
|
||||
|
||||
// Read a private key (file) string and create a public key. Return the private key.
|
||||
func readPrivateKeyRSA(m map[string]string) (*rsa.PrivateKey, error) {
|
||||
p := new(rsa.PrivateKey)
|
||||
p.Primes = []*big.Int{nil, nil}
|
||||
for k, v := range m {
|
||||
switch k {
|
||||
case "modulus", "publicexponent", "privateexponent", "prime1", "prime2":
|
||||
v1, err := fromBase64([]byte(v))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
switch k {
|
||||
case "modulus":
|
||||
p.PublicKey.N = big.NewInt(0)
|
||||
p.PublicKey.N.SetBytes(v1)
|
||||
case "publicexponent":
|
||||
i := big.NewInt(0)
|
||||
i.SetBytes(v1)
|
||||
p.PublicKey.E = int(i.Int64()) // int64 should be large enough
|
||||
case "privateexponent":
|
||||
p.D = big.NewInt(0)
|
||||
p.D.SetBytes(v1)
|
||||
case "prime1":
|
||||
p.Primes[0] = big.NewInt(0)
|
||||
p.Primes[0].SetBytes(v1)
|
||||
case "prime2":
|
||||
p.Primes[1] = big.NewInt(0)
|
||||
p.Primes[1].SetBytes(v1)
|
||||
}
|
||||
case "exponent1", "exponent2", "coefficient":
|
||||
// not used in Go (yet)
|
||||
case "created", "publish", "activate":
|
||||
// not used in Go (yet)
|
||||
}
|
||||
}
|
||||
return p, nil
|
||||
}
|
||||
|
||||
func readPrivateKeyDSA(m map[string]string) (*dsa.PrivateKey, error) {
|
||||
p := new(dsa.PrivateKey)
|
||||
p.X = big.NewInt(0)
|
||||
for k, v := range m {
|
||||
switch k {
|
||||
case "private_value(x)":
|
||||
v1, err := fromBase64([]byte(v))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
p.X.SetBytes(v1)
|
||||
case "created", "publish", "activate":
|
||||
/* not used in Go (yet) */
|
||||
}
|
||||
}
|
||||
return p, nil
|
||||
}
|
||||
|
||||
func readPrivateKeyECDSA(m map[string]string) (*ecdsa.PrivateKey, error) {
|
||||
p := new(ecdsa.PrivateKey)
|
||||
p.D = big.NewInt(0)
|
||||
// TODO: validate that the required flags are present
|
||||
for k, v := range m {
|
||||
switch k {
|
||||
case "privatekey":
|
||||
v1, err := fromBase64([]byte(v))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
p.D.SetBytes(v1)
|
||||
case "created", "publish", "activate":
|
||||
/* not used in Go (yet) */
|
||||
}
|
||||
}
|
||||
return p, nil
|
||||
}
|
||||
|
||||
func readPrivateKeyED25519(m map[string]string) (ed25519.PrivateKey, error) {
|
||||
var p ed25519.PrivateKey
|
||||
// TODO: validate that the required flags are present
|
||||
for k, v := range m {
|
||||
switch k {
|
||||
case "privatekey":
|
||||
p1, err := fromBase64([]byte(v))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if len(p1) != 32 {
|
||||
return nil, ErrPrivKey
|
||||
}
|
||||
// RFC 8080 and Golang's x/crypto/ed25519 differ as to how the
|
||||
// private keys are represented. RFC 8080 specifies that private
|
||||
// keys be stored solely as the seed value (p1 above) while the
|
||||
// ed25519 package represents them as the seed value concatenated
|
||||
// to the public key, which is derived from the seed value.
|
||||
//
|
||||
// ed25519.GenerateKey reads exactly 32 bytes from the passed in
|
||||
// io.Reader and uses them as the seed. It also derives the
|
||||
// public key and produces a compatible private key.
|
||||
_, p, err = ed25519.GenerateKey(bytes.NewReader(p1))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
case "created", "publish", "activate":
|
||||
/* not used in Go (yet) */
|
||||
}
|
||||
}
|
||||
return p, nil
|
||||
}
|
||||
|
||||
// parseKey reads a private key from r. It returns a map[string]string,
|
||||
// with the key-value pairs, or an error when the file is not correct.
|
||||
func parseKey(r io.Reader, file string) (map[string]string, error) {
|
||||
s, cancel := scanInit(r)
|
||||
m := make(map[string]string)
|
||||
c := make(chan lex)
|
||||
k := ""
|
||||
defer func() {
|
||||
cancel()
|
||||
// zlexer can send up to two tokens, the next one and possibly one remainder.
|
||||
// Do a non-blocking read.
|
||||
_, ok := <-c
|
||||
_, ok = <-c
|
||||
if !ok {
|
||||
// too bad
|
||||
}
|
||||
}()
|
||||
// Start the lexer
|
||||
go klexer(s, c)
|
||||
for l := range c {
|
||||
// It should alternate
|
||||
switch l.value {
|
||||
case zKey:
|
||||
k = l.token
|
||||
case zValue:
|
||||
if k == "" {
|
||||
return nil, &ParseError{file, "no private key seen", l}
|
||||
}
|
||||
//println("Setting", strings.ToLower(k), "to", l.token, "b")
|
||||
m[strings.ToLower(k)] = l.token
|
||||
k = ""
|
||||
}
|
||||
}
|
||||
return m, nil
|
||||
}
|
||||
|
||||
// klexer scans the sourcefile and returns tokens on the channel c.
|
||||
func klexer(s *scan, c chan lex) {
|
||||
var l lex
|
||||
str := "" // Hold the current read text
|
||||
commt := false
|
||||
key := true
|
||||
x, err := s.tokenText()
|
||||
defer close(c)
|
||||
for err == nil {
|
||||
l.column = s.position.Column
|
||||
l.line = s.position.Line
|
||||
switch x {
|
||||
case ':':
|
||||
if commt {
|
||||
break
|
||||
}
|
||||
l.token = str
|
||||
if key {
|
||||
l.value = zKey
|
||||
c <- l
|
||||
// Next token is a space, eat it
|
||||
s.tokenText()
|
||||
key = false
|
||||
str = ""
|
||||
} else {
|
||||
l.value = zValue
|
||||
}
|
||||
case ';':
|
||||
commt = true
|
||||
case '\n':
|
||||
if commt {
|
||||
// Reset a comment
|
||||
commt = false
|
||||
}
|
||||
l.value = zValue
|
||||
l.token = str
|
||||
c <- l
|
||||
str = ""
|
||||
commt = false
|
||||
key = true
|
||||
default:
|
||||
if commt {
|
||||
break
|
||||
}
|
||||
str += string(x)
|
||||
}
|
||||
x, err = s.tokenText()
|
||||
}
|
||||
if len(str) > 0 {
|
||||
// Send remainder
|
||||
l.token = str
|
||||
l.value = zValue
|
||||
c <- l
|
||||
}
|
||||
}
|
||||
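A quick round-trip sketch that exercises NewPrivateKey and the parseKey/klexer machinery above (key material is generated on the fly; nothing here is real key data):

package main

import (
	"fmt"

	"github.com/miekg/dns"
)

func main() {
	key := &dns.DNSKEY{
		Hdr:       dns.RR_Header{Name: "example.org.", Rrtype: dns.TypeDNSKEY, Class: dns.ClassINET, Ttl: 3600},
		Flags:     256,
		Protocol:  3,
		Algorithm: dns.ECDSAP256SHA256,
	}
	priv, err := key.Generate(256)
	if err != nil {
		panic(err)
	}

	// Serialize to the BIND v1.3 format and parse it back; NewPrivateKey
	// needs the public key in key, which Generate has already set.
	s := key.PrivateKeyString(priv)
	parsed, err := key.NewPrivateKey(s)
	if err != nil {
		panic(err)
	}
	fmt.Printf("round-tripped a %T\n", parsed) // *ecdsa.PrivateKey
}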
93
vendor/github.com/miekg/dns/dnssec_privkey.go
generated
vendored
Normal file
@ -0,0 +1,93 @@
|
||||
package dns
|
||||
|
||||
import (
|
||||
"crypto"
|
||||
"crypto/dsa"
|
||||
"crypto/ecdsa"
|
||||
"crypto/rsa"
|
||||
"math/big"
|
||||
"strconv"
|
||||
|
||||
"golang.org/x/crypto/ed25519"
|
||||
)
|
||||
|
||||
const format = "Private-key-format: v1.3\n"
|
||||
|
||||
// PrivateKeyString converts a PrivateKey to a string. This string has the same
|
||||
// format as the private-key-file of BIND9 (Private-key-format: v1.3).
|
||||
// It needs some info from the key (the algorithm), so it's a method on the DNSKEY
|
||||
// It supports rsa.PrivateKey, ecdsa.PrivateKey, dsa.PrivateKey and ed25519.PrivateKey
|
||||
func (r *DNSKEY) PrivateKeyString(p crypto.PrivateKey) string {
|
||||
algorithm := strconv.Itoa(int(r.Algorithm))
|
||||
algorithm += " (" + AlgorithmToString[r.Algorithm] + ")"
|
||||
|
||||
switch p := p.(type) {
|
||||
case *rsa.PrivateKey:
|
||||
modulus := toBase64(p.PublicKey.N.Bytes())
|
||||
e := big.NewInt(int64(p.PublicKey.E))
|
||||
publicExponent := toBase64(e.Bytes())
|
||||
privateExponent := toBase64(p.D.Bytes())
|
||||
prime1 := toBase64(p.Primes[0].Bytes())
|
||||
prime2 := toBase64(p.Primes[1].Bytes())
|
||||
// Calculate Exponent1/2 and Coefficient as per: http://en.wikipedia.org/wiki/RSA#Using_the_Chinese_remainder_algorithm
|
||||
// and from: http://code.google.com/p/go/issues/detail?id=987
|
||||
one := big.NewInt(1)
|
||||
p1 := big.NewInt(0).Sub(p.Primes[0], one)
|
||||
q1 := big.NewInt(0).Sub(p.Primes[1], one)
|
||||
exp1 := big.NewInt(0).Mod(p.D, p1)
|
||||
exp2 := big.NewInt(0).Mod(p.D, q1)
|
||||
coeff := big.NewInt(0).ModInverse(p.Primes[1], p.Primes[0])
|
||||
|
||||
exponent1 := toBase64(exp1.Bytes())
|
||||
exponent2 := toBase64(exp2.Bytes())
|
||||
coefficient := toBase64(coeff.Bytes())
|
||||
|
||||
return format +
|
||||
"Algorithm: " + algorithm + "\n" +
|
||||
"Modulus: " + modulus + "\n" +
|
||||
"PublicExponent: " + publicExponent + "\n" +
|
||||
"PrivateExponent: " + privateExponent + "\n" +
|
||||
"Prime1: " + prime1 + "\n" +
|
||||
"Prime2: " + prime2 + "\n" +
|
||||
"Exponent1: " + exponent1 + "\n" +
|
||||
"Exponent2: " + exponent2 + "\n" +
|
||||
"Coefficient: " + coefficient + "\n"
|
||||
|
||||
case *ecdsa.PrivateKey:
|
||||
var intlen int
|
||||
switch r.Algorithm {
|
||||
case ECDSAP256SHA256:
|
||||
intlen = 32
|
||||
case ECDSAP384SHA384:
|
||||
intlen = 48
|
||||
}
|
||||
private := toBase64(intToBytes(p.D, intlen))
|
||||
return format +
|
||||
"Algorithm: " + algorithm + "\n" +
|
||||
"PrivateKey: " + private + "\n"
|
||||
|
||||
case *dsa.PrivateKey:
|
||||
T := divRoundUp(divRoundUp(p.PublicKey.Parameters.G.BitLen(), 8)-64, 8)
|
||||
prime := toBase64(intToBytes(p.PublicKey.Parameters.P, 64+T*8))
|
||||
subprime := toBase64(intToBytes(p.PublicKey.Parameters.Q, 20))
|
||||
base := toBase64(intToBytes(p.PublicKey.Parameters.G, 64+T*8))
|
||||
priv := toBase64(intToBytes(p.X, 20))
|
||||
pub := toBase64(intToBytes(p.PublicKey.Y, 64+T*8))
|
||||
return format +
|
||||
"Algorithm: " + algorithm + "\n" +
|
||||
"Prime(p): " + prime + "\n" +
|
||||
"Subprime(q): " + subprime + "\n" +
|
||||
"Base(g): " + base + "\n" +
|
||||
"Private_value(x): " + priv + "\n" +
|
||||
"Public_value(y): " + pub + "\n"
|
||||
|
||||
case ed25519.PrivateKey:
|
||||
private := toBase64(p[:32])
|
||||
return format +
|
||||
"Algorithm: " + algorithm + "\n" +
|
||||
"PrivateKey: " + private + "\n"
|
||||
|
||||
default:
|
||||
return ""
|
||||
}
|
||||
}
|
||||
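To make the format concrete, a small sketch that prints the BIND-style text PrivateKeyString produces for a freshly generated Ed25519 key (the owner name is a placeholder):

package main

import (
	"fmt"

	"github.com/miekg/dns"
)

func main() {
	key := &dns.DNSKEY{
		Hdr:       dns.RR_Header{Name: "example.org.", Rrtype: dns.TypeDNSKEY, Class: dns.ClassINET, Ttl: 3600},
		Flags:     257,
		Protocol:  3,
		Algorithm: dns.ED25519,
	}
	priv, err := key.Generate(256) // Ed25519 keys are always 256 bits here
	if err != nil {
		panic(err)
	}
	// Output has the shape:
	//   Private-key-format: v1.3
	//   Algorithm: 15 (ED25519)
	//   PrivateKey: <base64 of the 32-byte seed>
	fmt.Print(key.PrivateKeyString(priv))
}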
272
vendor/github.com/miekg/dns/doc.go
generated
vendored
Normal file
@ -0,0 +1,272 @@
|
||||
/*
|
||||
Package dns implements a full featured interface to the Domain Name System.
|
||||
Server- and client-side programming is supported.
|
||||
The package allows complete control over what is sent out to the DNS. The package
|
||||
API follows the less-is-more principle, by presenting a small, clean interface.
|
||||
|
||||
The package dns supports (asynchronous) querying/replying, incoming/outgoing zone transfers,
|
||||
TSIG, EDNS0, dynamic updates, notifies and DNSSEC validation/signing.
|
||||
Note that domain names MUST be fully qualified before sending them; unqualified
|
||||
names in a message will result in a packing failure.
|
||||
|
||||
Resource records are native types. They are not stored in wire format.
|
||||
Basic usage pattern for creating a new resource record:
|
||||
|
||||
r := new(dns.MX)
|
||||
r.Hdr = dns.RR_Header{Name: "miek.nl.", Rrtype: dns.TypeMX,
|
||||
Class: dns.ClassINET, Ttl: 3600}
|
||||
r.Preference = 10
|
||||
r.Mx = "mx.miek.nl."
|
||||
|
||||
Or directly from a string:
|
||||
|
||||
mx, err := dns.NewRR("miek.nl. 3600 IN MX 10 mx.miek.nl.")
|
||||
|
||||
Or when the default origin (.) and TTL (3600) and class (IN) suit you:
|
||||
|
||||
mx, err := dns.NewRR("miek.nl MX 10 mx.miek.nl")
|
||||
|
||||
Or even:
|
||||
|
||||
mx, err := dns.NewRR("$ORIGIN nl.\nmiek 1H IN MX 10 mx.miek")
|
||||
|
||||
In the DNS, messages are exchanged; these messages contain resource
|
||||
records (sets). Basic use pattern for creating a message:
|
||||
|
||||
m := new(dns.Msg)
|
||||
m.SetQuestion("miek.nl.", dns.TypeMX)
|
||||
|
||||
Or when not certain if the domain name is fully qualified:
|
||||
|
||||
m.SetQuestion(dns.Fqdn("miek.nl"), dns.TypeMX)
|
||||
|
||||
The message m is now a message with the question section set to ask
|
||||
the MX records for the miek.nl. zone.
|
||||
|
||||
The following is slightly more verbose, but more flexible:
|
||||
|
||||
m1 := new(dns.Msg)
|
||||
m1.Id = dns.Id()
|
||||
m1.RecursionDesired = true
|
||||
m1.Question = make([]dns.Question, 1)
|
||||
m1.Question[0] = dns.Question{"miek.nl.", dns.TypeMX, dns.ClassINET}
|
||||
|
||||
After creating a message it can be sent.
|
||||
Basic use pattern for synchronously querying the DNS at a
|
||||
server configured on 127.0.0.1 and port 53:
|
||||
|
||||
c := new(dns.Client)
|
||||
in, rtt, err := c.Exchange(m1, "127.0.0.1:53")
|
||||
|
||||
Suppressing multiple outstanding queries (with the same question, type and
|
||||
class) is as easy as setting:
|
||||
|
||||
c.SingleInflight = true
|
||||
|
||||
More advanced options are available using a net.Dialer and the corresponding API.
|
||||
For example it is possible to set a timeout, or to specify a source IP address
|
||||
and port to use for the connection:
|
||||
|
||||
c := new(dns.Client)
|
||||
laddr := net.UDPAddr{
|
||||
IP: net.ParseIP("[::1]"),
|
||||
Port: 12345,
|
||||
Zone: "",
|
||||
}
|
||||
d := net.Dialer{
|
||||
Timeout: 200 * time.Millisecond,
|
||||
LocalAddr: &laddr,
|
||||
}
|
||||
in, rtt, err := c.ExchangeWithDialer(&d, m1, "8.8.8.8:53")
|
||||
|
||||
If these "advanced" features are not needed, a simple UDP query can be sent,
|
||||
with:
|
||||
|
||||
in, err := dns.Exchange(m1, "127.0.0.1:53")
|
||||
|
||||
When this function returns you will get a DNS message. A DNS message consists
|
||||
of four sections.
|
||||
The question section: in.Question, the answer section: in.Answer,
|
||||
the authority section: in.Ns and the additional section: in.Extra.
|
||||
|
||||
Each of these sections (except the Question section) contains a []RR. Basic
|
||||
use pattern for accessing the rdata of a TXT RR as the first RR in
|
||||
the Answer section:
|
||||
|
||||
if t, ok := in.Answer[0].(*dns.TXT); ok {
|
||||
// do something with t.Txt
|
||||
}
|
||||
|
||||
Domain Name and TXT Character String Representations
|
||||
|
||||
Both domain names and TXT character strings are converted to presentation
|
||||
form both when unpacked and when converted to strings.
|
||||
|
||||
For TXT character strings, tabs, carriage returns and line feeds will be
|
||||
converted to \t, \r and \n respectively. Back slashes and quotations marks
|
||||
will be escaped. Bytes below 32 and above 127 will be converted to \DDD
|
||||
form.
|
||||
|
||||
For domain names, in addition to the above rules brackets, periods,
|
||||
spaces, semicolons and the at symbol are escaped.
|
||||
|
||||
DNSSEC
|
||||
|
||||
DNSSEC (DNS Security Extension) adds a layer of security to the DNS. It
|
||||
uses public key cryptography to sign resource records. The
|
||||
public keys are stored in DNSKEY records and the signatures in RRSIG records.
|
||||
|
||||
Requesting DNSSEC information for a zone is done by adding the DO (DNSSEC OK) bit
|
||||
to a request.
|
||||
|
||||
m := new(dns.Msg)
|
||||
m.SetEdns0(4096, true)
|
||||
|
||||
Signature generation, signature verification and key generation are all supported.
|
||||
|
||||
DYNAMIC UPDATES
|
||||
|
||||
Dynamic updates reuse the DNS message format, but rename three of
|
||||
the sections. Question is Zone, Answer is Prerequisite, Authority is
|
||||
Update, only the Additional is not renamed. See RFC 2136 for the gory details.
|
||||
|
||||
You can set a rather complex set of rules for the existence or absence of
|
||||
certain resource records or names in a zone to specify if resource records
|
||||
should be added or removed. The table from RFC 2136 supplemented with the Go
|
||||
DNS function shows which functions exist to specify the prerequisites.
|
||||
|
||||
3.2.4 - Table Of Metavalues Used In Prerequisite Section
|
||||
|
||||
CLASS TYPE RDATA Meaning Function
|
||||
--------------------------------------------------------------
|
||||
ANY ANY empty Name is in use dns.NameUsed
|
||||
ANY rrset empty RRset exists (value indep) dns.RRsetUsed
|
||||
NONE ANY empty Name is not in use dns.NameNotUsed
|
||||
NONE rrset empty RRset does not exist dns.RRsetNotUsed
|
||||
zone rrset rr RRset exists (value dep) dns.Used
|
||||
|
||||
The prerequisite section can also be left empty.
|
||||
If you have decided on the prerequisites you can tell what RRs should
|
||||
be added or deleted. The next table shows the options you have and
|
||||
what functions to call.
|
||||
|
||||
3.4.2.6 - Table Of Metavalues Used In Update Section
|
||||
|
||||
CLASS TYPE RDATA Meaning Function
|
||||
---------------------------------------------------------------
|
||||
ANY ANY empty Delete all RRsets from name dns.RemoveName
|
||||
ANY rrset empty Delete an RRset dns.RemoveRRset
|
||||
NONE rrset rr Delete an RR from RRset dns.Remove
|
||||
zone rrset rr Add to an RRset dns.Insert
|
||||
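A minimal sketch combining a prerequisite with an insert (the zone name and address are placeholders):

	rr, _ := dns.NewRR("host.example.org. 300 IN A 192.0.2.1")
	m := new(dns.Msg)
	m.SetUpdate("example.org.")  // the zone goes into the (renamed) question section
	m.NameNotUsed([]dns.RR{rr})  // prerequisite: host.example.org. must not exist yet
	m.Insert([]dns.RR{rr})       // update: add the record
	// send m with a client as shown above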
|
||||
TRANSACTION SIGNATURE
|
||||
|
||||
A TSIG or transaction signature adds an HMAC TSIG record to each message sent.
|
||||
The supported algorithms include: HmacMD5, HmacSHA1, HmacSHA256 and HmacSHA512.
|
||||
|
||||
Basic use pattern when querying with a TSIG name "axfr." (note that these key names
|
||||
must be fully qualified - as they are domain names) and the base64 secret
|
||||
"so6ZGir4GPAqINNh9U5c3A==":
|
||||
|
||||
If an incoming message contains a TSIG record it MUST be the last record in
|
||||
the additional section (RFC2845 3.2). This means that you should make the
|
||||
call to SetTsig last, right before executing the query. If you make any
|
||||
changes to the RRset after calling SetTsig() the signature will be incorrect.
|
||||
|
||||
c := new(dns.Client)
|
||||
c.TsigSecret = map[string]string{"axfr.": "so6ZGir4GPAqINNh9U5c3A=="}
|
||||
m := new(dns.Msg)
|
||||
m.SetQuestion("miek.nl.", dns.TypeMX)
|
||||
m.SetTsig("axfr.", dns.HmacMD5, 300, time.Now().Unix())
|
||||
...
|
||||
// When sending the TSIG RR is calculated and filled in before sending
|
||||
|
||||
When requesting a zone transfer (almost all TSIG usage is when requesting zone transfers), with
|
||||
TSIG, this is the basic use pattern. In this example we request an AXFR for
|
||||
miek.nl. with TSIG key named "axfr." and secret "so6ZGir4GPAqINNh9U5c3A=="
|
||||
and using the server 176.58.119.54:
|
||||
|
||||
t := new(dns.Transfer)
|
||||
m := new(dns.Msg)
|
||||
t.TsigSecret = map[string]string{"axfr.": "so6ZGir4GPAqINNh9U5c3A=="}
|
||||
m.SetAxfr("miek.nl.")
|
||||
m.SetTsig("axfr.", dns.HmacMD5, 300, time.Now().Unix())
|
||||
c, err := t.In(m, "176.58.119.54:53")
|
||||
for r := range c { ... }
|
||||
|
||||
You can now read the records from the transfer as they come in. Each envelope is checked with TSIG.
|
||||
If something is not correct an error is returned.
|
||||
|
||||
Basic use pattern validating and replying to a message that has TSIG set.
|
||||
|
||||
server := &dns.Server{Addr: ":53", Net: "udp"}
|
||||
server.TsigSecret = map[string]string{"axfr.": "so6ZGir4GPAqINNh9U5c3A=="}
|
||||
go server.ListenAndServe()
|
||||
dns.HandleFunc(".", handleRequest)
|
||||
|
||||
func handleRequest(w dns.ResponseWriter, r *dns.Msg) {
|
||||
m := new(dns.Msg)
|
||||
m.SetReply(r)
|
||||
if r.IsTsig() != nil {
|
||||
if w.TsigStatus() == nil {
|
||||
// *Msg r has a TSIG record and it was validated
|
||||
m.SetTsig("axfr.", dns.HmacMD5, 300, time.Now().Unix())
|
||||
} else {
|
||||
// *Msg r has a TSIG record and it was not validated
|
||||
}
|
||||
}
|
||||
w.WriteMsg(m)
|
||||
}
|
||||
|
||||
PRIVATE RRS
|
||||
|
||||
RFC 6895 sets aside a range of type codes for private use. This range
|
||||
is 65,280 - 65,534 (0xFF00 - 0xFFFE). When experimenting with new Resource Records these
|
||||
can be used, before requesting an official type code from IANA.
|
||||
|
||||
see http://miek.nl/2014/September/21/idn-and-private-rr-in-go-dns/ for more
|
||||
information.
|
||||
|
||||
EDNS0
|
||||
|
||||
EDNS0 is an extension mechanism for the DNS defined in RFC 2671 and updated
|
||||
by RFC 6891. It defines a new RR type, the OPT RR, which is then completely
|
||||
abused.
|
||||
Basic use pattern for creating an (empty) OPT RR:
|
||||
|
||||
o := new(dns.OPT)
|
||||
o.Hdr.Name = "." // MUST be the root zone, per definition.
|
||||
o.Hdr.Rrtype = dns.TypeOPT
|
||||
|
||||
The rdata of an OPT RR consists of a slice of EDNS0 (RFC 6891)
|
||||
interfaces. Currently only a few have been standardized: EDNS0_NSID
|
||||
(RFC 5001) and EDNS0_SUBNET (draft-vandergaast-edns-client-subnet-02). Note
|
||||
that these options may be combined in an OPT RR.
|
||||
Basic use pattern for a server to check if (and which) options are set:
|
||||
|
||||
// o is a dns.OPT
|
||||
for _, s := range o.Option {
|
||||
switch e := s.(type) {
|
||||
case *dns.EDNS0_NSID:
|
||||
// do stuff with e.Nsid
|
||||
case *dns.EDNS0_SUBNET:
|
||||
// access e.Family, e.Address, etc.
|
||||
}
|
||||
}
|
||||
|
||||
SIG(0)
|
||||
|
||||
From RFC 2931:
|
||||
|
||||
SIG(0) provides protection for DNS transactions and requests ....
|
||||
... protection for glue records, DNS requests, protection for message headers
|
||||
on requests and responses, and protection of the overall integrity of a response.
|
||||
|
||||
It works like TSIG, except that SIG(0) uses public key cryptography, instead of the shared
|
||||
secret approach in TSIG.
|
||||
Supported algorithms: DSA, ECDSAP256SHA256, ECDSAP384SHA384, RSASHA1, RSASHA256 and
|
||||
RSASHA512.
|
||||
|
||||
Signing subsequent messages in multi-message sessions is not implemented.
|
||||
*/
|
||||
package dns
|
||||
627
vendor/github.com/miekg/dns/edns.go
generated
vendored
Normal file
@ -0,0 +1,627 @@
|
||||
package dns
|
||||
|
||||
import (
|
||||
"encoding/binary"
|
||||
"encoding/hex"
|
||||
"errors"
|
||||
"fmt"
|
||||
"net"
|
||||
"strconv"
|
||||
)
|
||||
|
||||
// EDNS0 Option codes.
|
||||
const (
|
||||
EDNS0LLQ = 0x1 // long lived queries: http://tools.ietf.org/html/draft-sekar-dns-llq-01
|
||||
EDNS0UL = 0x2 // update lease draft: http://files.dns-sd.org/draft-sekar-dns-ul.txt
|
||||
EDNS0NSID = 0x3 // nsid (See RFC 5001)
|
||||
EDNS0DAU = 0x5 // DNSSEC Algorithm Understood
|
||||
EDNS0DHU = 0x6 // DS Hash Understood
|
||||
EDNS0N3U = 0x7 // NSEC3 Hash Understood
|
||||
EDNS0SUBNET = 0x8 // client-subnet (See RFC 7871)
|
||||
EDNS0EXPIRE = 0x9 // EDNS0 expire
|
||||
EDNS0COOKIE = 0xa // EDNS0 Cookie
|
||||
EDNS0TCPKEEPALIVE = 0xb // EDNS0 tcp keep alive (See RFC 7828)
|
||||
EDNS0PADDING = 0xc // EDNS0 padding (See RFC 7830)
|
||||
EDNS0LOCALSTART = 0xFDE9 // Beginning of range reserved for local/experimental use (See RFC 6891)
|
||||
EDNS0LOCALEND = 0xFFFE // End of range reserved for local/experimental use (See RFC 6891)
|
||||
_DO = 1 << 15 // DNSSEC OK
|
||||
)
|
||||
|
||||
// OPT is the EDNS0 RR appended to messages to convey extra (meta) information.
|
||||
// See RFC 6891.
|
||||
type OPT struct {
|
||||
Hdr RR_Header
|
||||
Option []EDNS0 `dns:"opt"`
|
||||
}
|
||||
|
||||
func (rr *OPT) String() string {
|
||||
s := "\n;; OPT PSEUDOSECTION:\n; EDNS: version " + strconv.Itoa(int(rr.Version())) + "; "
|
||||
if rr.Do() {
|
||||
s += "flags: do; "
|
||||
} else {
|
||||
s += "flags: ; "
|
||||
}
|
||||
s += "udp: " + strconv.Itoa(int(rr.UDPSize()))
|
||||
|
||||
for _, o := range rr.Option {
|
||||
switch o.(type) {
|
||||
case *EDNS0_NSID:
|
||||
s += "\n; NSID: " + o.String()
|
||||
h, e := o.pack()
|
||||
var r string
|
||||
if e == nil {
|
||||
for _, c := range h {
|
||||
r += "(" + string(c) + ")"
|
||||
}
|
||||
s += " " + r
|
||||
}
|
||||
case *EDNS0_SUBNET:
|
||||
s += "\n; SUBNET: " + o.String()
|
||||
case *EDNS0_COOKIE:
|
||||
s += "\n; COOKIE: " + o.String()
|
||||
case *EDNS0_UL:
|
||||
s += "\n; UPDATE LEASE: " + o.String()
|
||||
case *EDNS0_LLQ:
|
||||
s += "\n; LONG LIVED QUERIES: " + o.String()
|
||||
case *EDNS0_DAU:
|
||||
s += "\n; DNSSEC ALGORITHM UNDERSTOOD: " + o.String()
|
||||
case *EDNS0_DHU:
|
||||
s += "\n; DS HASH UNDERSTOOD: " + o.String()
|
||||
case *EDNS0_N3U:
|
||||
s += "\n; NSEC3 HASH UNDERSTOOD: " + o.String()
|
||||
case *EDNS0_LOCAL:
|
||||
s += "\n; LOCAL OPT: " + o.String()
|
||||
case *EDNS0_PADDING:
|
||||
s += "\n; PADDING: " + o.String()
|
||||
}
|
||||
}
|
||||
return s
|
||||
}
|
||||
|
||||
func (rr *OPT) len() int {
|
||||
l := rr.Hdr.len()
|
||||
for i := 0; i < len(rr.Option); i++ {
|
||||
l += 4 // Account for 2-byte option code and 2-byte option length.
|
||||
lo, _ := rr.Option[i].pack()
|
||||
l += len(lo)
|
||||
}
|
||||
return l
|
||||
}
|
||||
|
||||
// return the old value -> delete SetVersion?
|
||||
|
||||
// Version returns the EDNS version used. Only zero is defined.
|
||||
func (rr *OPT) Version() uint8 {
|
||||
return uint8((rr.Hdr.Ttl & 0x00FF0000) >> 16)
|
||||
}
|
||||
|
||||
// SetVersion sets the version of EDNS. This is usually zero.
|
||||
func (rr *OPT) SetVersion(v uint8) {
|
||||
rr.Hdr.Ttl = rr.Hdr.Ttl&0xFF00FFFF | (uint32(v) << 16)
|
||||
}
|
||||
|
||||
// ExtendedRcode returns the EDNS extended RCODE field (the upper 8 bits of the TTL).
|
||||
func (rr *OPT) ExtendedRcode() int {
|
||||
return int((rr.Hdr.Ttl & 0xFF000000) >> 24)
|
||||
}
|
||||
|
||||
// SetExtendedRcode sets the EDNS extended RCODE field.
|
||||
func (rr *OPT) SetExtendedRcode(v uint8) {
|
||||
rr.Hdr.Ttl = rr.Hdr.Ttl&0x00FFFFFF | (uint32(v) << 24)
|
||||
}
|
||||
|
||||
// UDPSize returns the UDP buffer size.
|
||||
func (rr *OPT) UDPSize() uint16 {
|
||||
return rr.Hdr.Class
|
||||
}
|
||||
|
||||
// SetUDPSize sets the UDP buffer size.
|
||||
func (rr *OPT) SetUDPSize(size uint16) {
|
||||
rr.Hdr.Class = size
|
||||
}
|
||||
|
||||
// Do returns the value of the DO (DNSSEC OK) bit.
|
||||
func (rr *OPT) Do() bool {
|
||||
return rr.Hdr.Ttl&_DO == _DO
|
||||
}
|
||||
|
||||
// SetDo sets the DO (DNSSEC OK) bit.
|
||||
// If we pass an argument, set the DO bit to that value.
|
||||
// It is possible to pass 2 or more arguments. Any arguments after the first are silently ignored.
|
||||
func (rr *OPT) SetDo(do ...bool) {
|
||||
if len(do) == 1 {
|
||||
if do[0] {
|
||||
rr.Hdr.Ttl |= _DO
|
||||
} else {
|
||||
rr.Hdr.Ttl &^= _DO
|
||||
}
|
||||
} else {
|
||||
rr.Hdr.Ttl |= _DO
|
||||
}
|
||||
}
|
||||
|
||||
// EDNS0 defines an EDNS0 Option. An OPT RR can have multiple options appended to it.
|
||||
type EDNS0 interface {
|
||||
// Option returns the option code for the option.
|
||||
Option() uint16
|
||||
// pack returns the bytes of the option data.
|
||||
pack() ([]byte, error)
|
||||
// unpack sets the data as found in the buffer. It also sets
|
||||
// the length of the slice as the length of the option data.
|
||||
unpack([]byte) error
|
||||
// String returns the string representation of the option.
|
||||
String() string
|
||||
}
|
||||
|
||||
// EDNS0_NSID option is used to retrieve a nameserver
|
||||
// identifier. When sending a request Nsid must be set to the empty string.
|
||||
// The identifier is an opaque string encoded as hex.
|
||||
// Basic use pattern for creating an nsid option:
|
||||
//
|
||||
// o := new(dns.OPT)
|
||||
// o.Hdr.Name = "."
|
||||
// o.Hdr.Rrtype = dns.TypeOPT
|
||||
// e := new(dns.EDNS0_NSID)
|
||||
// e.Code = dns.EDNS0NSID
|
||||
// e.Nsid = "AA"
|
||||
// o.Option = append(o.Option, e)
|
||||
type EDNS0_NSID struct {
|
||||
Code uint16 // Always EDNS0NSID
|
||||
Nsid string // This string needs to be hex encoded
|
||||
}
|
||||
|
||||
func (e *EDNS0_NSID) pack() ([]byte, error) {
|
||||
h, err := hex.DecodeString(e.Nsid)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return h, nil
|
||||
}
|
||||
|
||||
// Option implements the EDNS0 interface.
|
||||
func (e *EDNS0_NSID) Option() uint16 { return EDNS0NSID } // Option returns the option code.
|
||||
func (e *EDNS0_NSID) unpack(b []byte) error { e.Nsid = hex.EncodeToString(b); return nil }
|
||||
func (e *EDNS0_NSID) String() string { return string(e.Nsid) }
|
||||
|
||||
// EDNS0_SUBNET is the subnet option that is used to give the remote nameserver
|
||||
// an idea of where the client lives. See RFC 7871. It can then give back a different
|
||||
// answer depending on the location or network topology.
|
||||
// Basic use pattern for creating a subnet option:
|
||||
//
|
||||
// o := new(dns.OPT)
|
||||
// o.Hdr.Name = "."
|
||||
// o.Hdr.Rrtype = dns.TypeOPT
|
||||
// e := new(dns.EDNS0_SUBNET)
|
||||
// e.Code = dns.EDNS0SUBNET
|
||||
// e.Family = 1 // 1 for IPv4 source address, 2 for IPv6
|
||||
// e.SourceNetmask = 32 // 32 for IPV4, 128 for IPv6
|
||||
// e.SourceScope = 0
|
||||
// e.Address = net.ParseIP("127.0.0.1").To4() // for IPv4
|
||||
// // e.Address = net.ParseIP("2001:7b8:32a::2") // for IPV6
|
||||
// o.Option = append(o.Option, e)
|
||||
//
|
||||
// This code will parse all the available bits when unpacking (up to optlen).
|
||||
// When packing it will apply SourceNetmask. If you need more advanced logic,
|
||||
// patches welcome and good luck.
|
||||
type EDNS0_SUBNET struct {
|
||||
Code uint16 // Always EDNS0SUBNET
|
||||
Family uint16 // 1 for IP, 2 for IP6
|
||||
SourceNetmask uint8
|
||||
SourceScope uint8
|
||||
Address net.IP
|
||||
}
|
||||
|
||||
// Option implements the EDNS0 interface.
|
||||
func (e *EDNS0_SUBNET) Option() uint16 { return EDNS0SUBNET }
|
||||
|
||||
func (e *EDNS0_SUBNET) pack() ([]byte, error) {
|
||||
b := make([]byte, 4)
|
||||
binary.BigEndian.PutUint16(b[0:], e.Family)
|
||||
b[2] = e.SourceNetmask
|
||||
b[3] = e.SourceScope
|
||||
switch e.Family {
|
||||
case 0:
|
||||
// "dig" sets AddressFamily to 0 if SourceNetmask is also 0
|
||||
// We might not need to complain either
|
||||
if e.SourceNetmask != 0 {
|
||||
return nil, errors.New("dns: bad address family")
|
||||
}
|
||||
case 1:
|
||||
if e.SourceNetmask > net.IPv4len*8 {
|
||||
return nil, errors.New("dns: bad netmask")
|
||||
}
|
||||
if len(e.Address.To4()) != net.IPv4len {
|
||||
return nil, errors.New("dns: bad address")
|
||||
}
|
||||
ip := e.Address.To4().Mask(net.CIDRMask(int(e.SourceNetmask), net.IPv4len*8))
|
||||
needLength := (e.SourceNetmask + 8 - 1) / 8 // division rounding up
|
||||
b = append(b, ip[:needLength]...)
|
||||
case 2:
|
||||
if e.SourceNetmask > net.IPv6len*8 {
|
||||
return nil, errors.New("dns: bad netmask")
|
||||
}
|
||||
if len(e.Address) != net.IPv6len {
|
||||
return nil, errors.New("dns: bad address")
|
||||
}
|
||||
ip := e.Address.Mask(net.CIDRMask(int(e.SourceNetmask), net.IPv6len*8))
|
||||
needLength := (e.SourceNetmask + 8 - 1) / 8 // division rounding up
|
||||
b = append(b, ip[:needLength]...)
|
||||
default:
|
||||
return nil, errors.New("dns: bad address family")
|
||||
}
|
||||
return b, nil
|
||||
}
|
||||
|
||||
func (e *EDNS0_SUBNET) unpack(b []byte) error {
|
||||
if len(b) < 4 {
|
||||
return ErrBuf
|
||||
}
|
||||
e.Family = binary.BigEndian.Uint16(b)
|
||||
e.SourceNetmask = b[2]
|
||||
e.SourceScope = b[3]
|
||||
switch e.Family {
|
||||
case 0:
|
||||
// "dig" sets AddressFamily to 0 if SourceNetmask is also 0
|
||||
// It's okay to accept such a packet
|
||||
if e.SourceNetmask != 0 {
|
||||
return errors.New("dns: bad address family")
|
||||
}
|
||||
e.Address = net.IPv4(0, 0, 0, 0)
|
||||
case 1:
|
||||
if e.SourceNetmask > net.IPv4len*8 || e.SourceScope > net.IPv4len*8 {
|
||||
return errors.New("dns: bad netmask")
|
||||
}
|
||||
addr := make([]byte, net.IPv4len)
|
||||
for i := 0; i < net.IPv4len && 4+i < len(b); i++ {
|
||||
addr[i] = b[4+i]
|
||||
}
|
||||
e.Address = net.IPv4(addr[0], addr[1], addr[2], addr[3])
|
||||
case 2:
|
||||
if e.SourceNetmask > net.IPv6len*8 || e.SourceScope > net.IPv6len*8 {
|
||||
return errors.New("dns: bad netmask")
|
||||
}
|
||||
addr := make([]byte, net.IPv6len)
|
||||
for i := 0; i < net.IPv6len && 4+i < len(b); i++ {
|
||||
addr[i] = b[4+i]
|
||||
}
|
||||
e.Address = net.IP{addr[0], addr[1], addr[2], addr[3], addr[4],
|
||||
addr[5], addr[6], addr[7], addr[8], addr[9], addr[10],
|
||||
addr[11], addr[12], addr[13], addr[14], addr[15]}
|
||||
default:
|
||||
return errors.New("dns: bad address family")
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (e *EDNS0_SUBNET) String() (s string) {
|
||||
if e.Address == nil {
|
||||
s = "<nil>"
|
||||
} else if e.Address.To4() != nil {
|
||||
s = e.Address.String()
|
||||
} else {
|
||||
s = "[" + e.Address.String() + "]"
|
||||
}
|
||||
s += "/" + strconv.Itoa(int(e.SourceNetmask)) + "/" + strconv.Itoa(int(e.SourceScope))
|
||||
return
|
||||
}
|
||||
|
||||
// The EDNS0_COOKIE option is used to add a DNS Cookie to a message.
|
||||
//
|
||||
// o := new(dns.OPT)
|
||||
// o.Hdr.Name = "."
|
||||
// o.Hdr.Rrtype = dns.TypeOPT
|
||||
// e := new(dns.EDNS0_COOKIE)
|
||||
// e.Code = dns.EDNS0COOKIE
|
||||
// e.Cookie = "24a5ac.."
|
||||
// o.Option = append(o.Option, e)
|
||||
//
|
||||
// The Cookie field consists of a client cookie (RFC 7873 Section 4), which is
|
||||
// always 8 bytes. It may then optionally be followed by the server cookie. The server
|
||||
// cookie is of variable length, 8 to a maximum of 32 bytes. In other words:
|
||||
//
|
||||
// cCookie := o.Cookie[:16]
|
||||
// sCookie := o.Cookie[16:]
|
||||
//
|
||||
// There is no guarantee that the Cookie string has a specific length.
|
||||
type EDNS0_COOKIE struct {
|
||||
Code uint16 // Always EDNS0COOKIE
|
||||
Cookie string // Hex-encoded cookie data
|
||||
}
|
||||
|
||||
func (e *EDNS0_COOKIE) pack() ([]byte, error) {
|
||||
h, err := hex.DecodeString(e.Cookie)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return h, nil
|
||||
}
|
||||
|
||||
// Option implements the EDNS0 interface.
|
||||
func (e *EDNS0_COOKIE) Option() uint16 { return EDNS0COOKIE }
|
||||
func (e *EDNS0_COOKIE) unpack(b []byte) error { e.Cookie = hex.EncodeToString(b); return nil }
|
||||
func (e *EDNS0_COOKIE) String() string { return e.Cookie }
|
||||
|
||||
// The EDNS0_UL (Update Lease) (draft RFC) option is used to tell the server to set
|
||||
// an expiration on an update RR. This is helpful for clients that cannot clean
|
||||
// up after themselves. This is a draft RFC and more information can be found at
|
||||
// http://files.dns-sd.org/draft-sekar-dns-ul.txt
|
||||
//
|
||||
// o := new(dns.OPT)
|
||||
// o.Hdr.Name = "."
|
||||
// o.Hdr.Rrtype = dns.TypeOPT
|
||||
// e := new(dns.EDNS0_UL)
|
||||
// e.Code = dns.EDNS0UL
|
||||
// e.Lease = 120 // in seconds
|
||||
// o.Option = append(o.Option, e)
|
||||
type EDNS0_UL struct {
|
||||
Code uint16 // Always EDNS0UL
|
||||
Lease uint32
|
||||
}
|
||||
|
||||
// Option implements the EDNS0 interface.
|
||||
func (e *EDNS0_UL) Option() uint16 { return EDNS0UL }
|
||||
func (e *EDNS0_UL) String() string { return strconv.FormatUint(uint64(e.Lease), 10) }
|
||||
|
||||
// Copied: http://golang.org/src/pkg/net/dnsmsg.go
|
||||
func (e *EDNS0_UL) pack() ([]byte, error) {
|
||||
b := make([]byte, 4)
|
||||
binary.BigEndian.PutUint32(b, e.Lease)
|
||||
return b, nil
|
||||
}
|
||||
|
||||
func (e *EDNS0_UL) unpack(b []byte) error {
|
||||
if len(b) < 4 {
|
||||
return ErrBuf
|
||||
}
|
||||
e.Lease = binary.BigEndian.Uint32(b)
|
||||
return nil
|
||||
}
|
||||
|
||||
// EDNS0_LLQ stands for Long Lived Queries: http://tools.ietf.org/html/draft-sekar-dns-llq-01
|
||||
// Implemented for completeness, as the EDNS0 type code is assigned.
|
||||
type EDNS0_LLQ struct {
|
||||
Code uint16 // Always EDNS0LLQ
|
||||
Version uint16
|
||||
Opcode uint16
|
||||
Error uint16
|
||||
Id uint64
|
||||
LeaseLife uint32
|
||||
}
|
||||
|
||||
// Option implements the EDNS0 interface.
|
||||
func (e *EDNS0_LLQ) Option() uint16 { return EDNS0LLQ }
|
||||
|
||||
func (e *EDNS0_LLQ) pack() ([]byte, error) {
|
||||
b := make([]byte, 18)
|
||||
binary.BigEndian.PutUint16(b[0:], e.Version)
|
||||
binary.BigEndian.PutUint16(b[2:], e.Opcode)
|
||||
binary.BigEndian.PutUint16(b[4:], e.Error)
|
||||
binary.BigEndian.PutUint64(b[6:], e.Id)
|
||||
binary.BigEndian.PutUint32(b[14:], e.LeaseLife)
|
||||
return b, nil
|
||||
}
|
||||
|
||||
func (e *EDNS0_LLQ) unpack(b []byte) error {
|
||||
if len(b) < 18 {
|
||||
return ErrBuf
|
||||
}
|
||||
e.Version = binary.BigEndian.Uint16(b[0:])
|
||||
e.Opcode = binary.BigEndian.Uint16(b[2:])
|
||||
e.Error = binary.BigEndian.Uint16(b[4:])
|
||||
e.Id = binary.BigEndian.Uint64(b[6:])
|
||||
e.LeaseLife = binary.BigEndian.Uint32(b[14:])
|
||||
return nil
|
||||
}
|
||||
|
||||
func (e *EDNS0_LLQ) String() string {
|
||||
s := strconv.FormatUint(uint64(e.Version), 10) + " " + strconv.FormatUint(uint64(e.Opcode), 10) +
|
||||
" " + strconv.FormatUint(uint64(e.Error), 10) + " " + strconv.FormatUint(uint64(e.Id), 10) +
|
||||
" " + strconv.FormatUint(uint64(e.LeaseLife), 10)
|
||||
return s
|
||||
}
|
||||
|
||||
// EDNS0_DAU implements the EDNS0 "DNSSEC Algorithm Understood" option. See RFC 6975.
|
||||
type EDNS0_DAU struct {
|
||||
Code uint16 // Always EDNS0DAU
|
||||
AlgCode []uint8
|
||||
}
|
||||
|
||||
// Option implements the EDNS0 interface.
|
||||
func (e *EDNS0_DAU) Option() uint16 { return EDNS0DAU }
|
||||
func (e *EDNS0_DAU) pack() ([]byte, error) { return e.AlgCode, nil }
|
||||
func (e *EDNS0_DAU) unpack(b []byte) error { e.AlgCode = b; return nil }
|
||||
|
||||
func (e *EDNS0_DAU) String() string {
|
||||
s := ""
|
||||
for i := 0; i < len(e.AlgCode); i++ {
|
||||
if a, ok := AlgorithmToString[e.AlgCode[i]]; ok {
|
||||
s += " " + a
|
||||
} else {
|
||||
s += " " + strconv.Itoa(int(e.AlgCode[i]))
|
||||
}
|
||||
}
|
||||
return s
|
||||
}
|
||||
|
||||
// EDNS0_DHU implements the EDNS0 "DS Hash Understood" option. See RFC 6975.
|
||||
type EDNS0_DHU struct {
|
||||
Code uint16 // Always EDNS0DHU
|
||||
AlgCode []uint8
|
||||
}
|
||||
|
||||
// Option implements the EDNS0 interface.
|
||||
func (e *EDNS0_DHU) Option() uint16 { return EDNS0DHU }
|
||||
func (e *EDNS0_DHU) pack() ([]byte, error) { return e.AlgCode, nil }
|
||||
func (e *EDNS0_DHU) unpack(b []byte) error { e.AlgCode = b; return nil }
|
||||
|
||||
func (e *EDNS0_DHU) String() string {
|
||||
s := ""
|
||||
for i := 0; i < len(e.AlgCode); i++ {
|
||||
if a, ok := HashToString[e.AlgCode[i]]; ok {
|
||||
s += " " + a
|
||||
} else {
|
||||
s += " " + strconv.Itoa(int(e.AlgCode[i]))
|
||||
}
|
||||
}
|
||||
return s
|
||||
}
|
||||
|
||||
// EDNS0_N3U implements the EDNS0 "NSEC3 Hash Understood" option. See RFC 6975.
|
||||
type EDNS0_N3U struct {
|
||||
Code uint16 // Always EDNS0N3U
|
||||
AlgCode []uint8
|
||||
}
|
||||
|
||||
// Option implements the EDNS0 interface.
|
||||
func (e *EDNS0_N3U) Option() uint16 { return EDNS0N3U }
|
||||
func (e *EDNS0_N3U) pack() ([]byte, error) { return e.AlgCode, nil }
|
||||
func (e *EDNS0_N3U) unpack(b []byte) error { e.AlgCode = b; return nil }
|
||||
|
||||
func (e *EDNS0_N3U) String() string {
|
||||
// Re-use the hash map
|
||||
s := ""
|
||||
for i := 0; i < len(e.AlgCode); i++ {
|
||||
if a, ok := HashToString[e.AlgCode[i]]; ok {
|
||||
s += " " + a
|
||||
} else {
|
||||
s += " " + strconv.Itoa(int(e.AlgCode[i]))
|
||||
}
|
||||
}
|
||||
return s
|
||||
}
|
||||
|
||||
// EDNS0_EXPIRE implements the EDNS0 option as described in RFC 7314.
|
||||
type EDNS0_EXPIRE struct {
|
||||
Code uint16 // Always EDNS0EXPIRE
|
||||
Expire uint32
|
||||
}
|
||||
|
||||
// Option implements the EDNS0 interface.
|
||||
func (e *EDNS0_EXPIRE) Option() uint16 { return EDNS0EXPIRE }
|
||||
func (e *EDNS0_EXPIRE) String() string { return strconv.FormatUint(uint64(e.Expire), 10) }
|
||||
|
||||
func (e *EDNS0_EXPIRE) pack() ([]byte, error) {
|
||||
b := make([]byte, 4)
|
||||
b[0] = byte(e.Expire >> 24)
|
||||
b[1] = byte(e.Expire >> 16)
|
||||
b[2] = byte(e.Expire >> 8)
|
||||
b[3] = byte(e.Expire)
|
||||
return b, nil
|
||||
}
|
||||
|
||||
func (e *EDNS0_EXPIRE) unpack(b []byte) error {
|
||||
if len(b) < 4 {
|
||||
return ErrBuf
|
||||
}
|
||||
e.Expire = binary.BigEndian.Uint32(b)
|
||||
return nil
|
||||
}
|
||||
|
||||
// The EDNS0_LOCAL option is used for local/experimental purposes. The option
|
||||
// code is recommended to be within the range [EDNS0LOCALSTART, EDNS0LOCALEND]
|
||||
// (RFC6891), although any unassigned code can actually be used. The content of
|
||||
// the option is made available in Data, unaltered.
|
||||
// Basic use pattern for creating a local option:
|
||||
//
|
||||
// o := new(dns.OPT)
|
||||
// o.Hdr.Name = "."
|
||||
// o.Hdr.Rrtype = dns.TypeOPT
|
||||
// e := new(dns.EDNS0_LOCAL)
|
||||
// e.Code = dns.EDNS0LOCALSTART
|
||||
// e.Data = []byte{72, 82, 74}
|
||||
// o.Option = append(o.Option, e)
|
||||
type EDNS0_LOCAL struct {
|
||||
Code uint16
|
||||
Data []byte
|
||||
}
|
||||
|
||||
// Option implements the EDNS0 interface.
|
||||
func (e *EDNS0_LOCAL) Option() uint16 { return e.Code }
|
||||
func (e *EDNS0_LOCAL) String() string {
|
||||
return strconv.FormatInt(int64(e.Code), 10) + ":0x" + hex.EncodeToString(e.Data)
|
||||
}
|
||||
|
||||
func (e *EDNS0_LOCAL) pack() ([]byte, error) {
|
||||
b := make([]byte, len(e.Data))
|
||||
copied := copy(b, e.Data)
|
||||
if copied != len(e.Data) {
|
||||
return nil, ErrBuf
|
||||
}
|
||||
return b, nil
|
||||
}
|
||||
|
||||
func (e *EDNS0_LOCAL) unpack(b []byte) error {
|
||||
e.Data = make([]byte, len(b))
|
||||
copied := copy(e.Data, b)
|
||||
if copied != len(b) {
|
||||
return ErrBuf
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// EDNS0_TCP_KEEPALIVE is an EDNS0 option that instructs the server to keep
|
||||
// the TCP connection alive. See RFC 7828.
|
||||
type EDNS0_TCP_KEEPALIVE struct {
|
||||
Code uint16 // Always EDNSTCPKEEPALIVE
|
||||
Length uint16 // the value 0 if the TIMEOUT is omitted, the value 2 if it is present;
|
||||
Timeout uint16 // an idle timeout value for the TCP connection, specified in units of 100 milliseconds, encoded in network byte order.
|
||||
}
|
||||
|
||||
// Option implements the EDNS0 interface.
|
||||
func (e *EDNS0_TCP_KEEPALIVE) Option() uint16 { return EDNS0TCPKEEPALIVE }
|
||||
|
||||
func (e *EDNS0_TCP_KEEPALIVE) pack() ([]byte, error) {
|
||||
if e.Timeout != 0 && e.Length != 2 {
|
||||
return nil, errors.New("dns: timeout specified but length is not 2")
|
||||
}
|
||||
if e.Timeout == 0 && e.Length != 0 {
|
||||
return nil, errors.New("dns: timeout not specified but length is not 0")
|
||||
}
|
||||
b := make([]byte, 4+e.Length)
|
||||
binary.BigEndian.PutUint16(b[0:], e.Code)
|
||||
binary.BigEndian.PutUint16(b[2:], e.Length)
|
||||
if e.Length == 2 {
|
||||
binary.BigEndian.PutUint16(b[4:], e.Timeout)
|
||||
}
|
||||
return b, nil
|
||||
}
|
||||
|
||||
func (e *EDNS0_TCP_KEEPALIVE) unpack(b []byte) error {
|
||||
if len(b) < 4 {
|
||||
return ErrBuf
|
||||
}
|
||||
e.Length = binary.BigEndian.Uint16(b[2:4])
|
||||
if e.Length != 0 && e.Length != 2 {
|
||||
return errors.New("dns: length mismatch, want 0/2 but got " + strconv.FormatUint(uint64(e.Length), 10))
|
||||
}
|
||||
if e.Length == 2 {
|
||||
if len(b) < 6 {
|
||||
return ErrBuf
|
||||
}
|
||||
e.Timeout = binary.BigEndian.Uint16(b[4:6])
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (e *EDNS0_TCP_KEEPALIVE) String() (s string) {
|
||||
s = "use tcp keep-alive"
|
||||
if e.Length == 0 {
|
||||
s += ", timeout omitted"
|
||||
} else {
|
||||
s += fmt.Sprintf(", timeout %dms", e.Timeout*100)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// EDNS0_PADDING option is used to add padding to a request/response. The default
|
||||
// value of padding SHOULD be 0x0 but other values MAY be used, for instance if
|
||||
// compression is applied before encryption which may break signatures.
|
||||
type EDNS0_PADDING struct {
|
||||
Padding []byte
|
||||
}
|
||||
|
||||
// Option implements the EDNS0 interface.
|
||||
func (e *EDNS0_PADDING) Option() uint16 { return EDNS0PADDING }
|
||||
func (e *EDNS0_PADDING) pack() ([]byte, error) { return e.Padding, nil }
|
||||
func (e *EDNS0_PADDING) unpack(b []byte) error { e.Padding = b; return nil }
|
||||
func (e *EDNS0_PADDING) String() string { return fmt.Sprintf("%0X", e.Padding) }
|
||||
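Pulling the pieces of this file together, a short sketch that builds an OPT RR with the DO bit, a UDP buffer size and one of the options defined above, and attaches it to a query (values are illustrative):

package main

import (
	"fmt"

	"github.com/miekg/dns"
)

func main() {
	o := new(dns.OPT)
	o.Hdr.Name = "." // the OPT RR is always owned by the root
	o.Hdr.Rrtype = dns.TypeOPT
	o.SetUDPSize(4096) // advertised UDP buffer size, stored in the Class field
	o.SetDo()          // DNSSEC OK bit, stored in the Ttl field

	// Ask the server to keep the TCP connection open for 10 seconds
	// (Timeout is in units of 100 ms, so Length must be 2).
	ka := &dns.EDNS0_TCP_KEEPALIVE{Code: dns.EDNS0TCPKEEPALIVE, Length: 2, Timeout: 100}
	o.Option = append(o.Option, ka)

	m := new(dns.Msg)
	m.SetQuestion("example.org.", dns.TypeA)
	m.Extra = append(m.Extra, o) // the OPT RR lives in the additional section

	fmt.Println(m)
}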
87
vendor/github.com/miekg/dns/format.go
generated
vendored
Normal file
@ -0,0 +1,87 @@
|
||||
package dns
|
||||
|
||||
import (
|
||||
"net"
|
||||
"reflect"
|
||||
"strconv"
|
||||
)
|
||||
|
||||
// NumField returns the number of rdata fields r has.
|
||||
func NumField(r RR) int {
|
||||
return reflect.ValueOf(r).Elem().NumField() - 1 // Remove RR_Header
|
||||
}
|
||||
|
||||
// Field returns the rdata field i as a string. Fields are indexed starting from 1.
|
||||
// RR types that hold slice data, for instance the NSEC type bitmap, will return a single
|
||||
// string where the types are concatenated using a space.
|
||||
// Accessing non-existing fields will cause a panic.
|
||||
func Field(r RR, i int) string {
|
||||
if i == 0 {
|
||||
return ""
|
||||
}
|
||||
d := reflect.ValueOf(r).Elem().Field(i)
|
||||
switch k := d.Kind(); k {
|
||||
case reflect.String:
|
||||
return d.String()
|
||||
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
|
||||
return strconv.FormatInt(d.Int(), 10)
|
||||
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
|
||||
return strconv.FormatUint(d.Uint(), 10)
|
||||
case reflect.Slice:
|
||||
switch reflect.ValueOf(r).Elem().Type().Field(i).Tag {
|
||||
case `dns:"a"`:
|
||||
// TODO(miek): Hmm store this as 16 bytes
|
||||
if d.Len() < net.IPv6len {
|
||||
return net.IPv4(byte(d.Index(0).Uint()),
|
||||
byte(d.Index(1).Uint()),
|
||||
byte(d.Index(2).Uint()),
|
||||
byte(d.Index(3).Uint())).String()
|
||||
}
|
||||
return net.IPv4(byte(d.Index(12).Uint()),
|
||||
byte(d.Index(13).Uint()),
|
||||
byte(d.Index(14).Uint()),
|
||||
byte(d.Index(15).Uint())).String()
|
||||
case `dns:"aaaa"`:
|
||||
return net.IP{
|
||||
byte(d.Index(0).Uint()),
|
||||
byte(d.Index(1).Uint()),
|
||||
byte(d.Index(2).Uint()),
|
||||
byte(d.Index(3).Uint()),
|
||||
byte(d.Index(4).Uint()),
|
||||
byte(d.Index(5).Uint()),
|
||||
byte(d.Index(6).Uint()),
|
||||
byte(d.Index(7).Uint()),
|
||||
byte(d.Index(8).Uint()),
|
||||
byte(d.Index(9).Uint()),
|
||||
byte(d.Index(10).Uint()),
|
||||
byte(d.Index(11).Uint()),
|
||||
byte(d.Index(12).Uint()),
|
||||
byte(d.Index(13).Uint()),
|
||||
byte(d.Index(14).Uint()),
|
||||
byte(d.Index(15).Uint()),
|
||||
}.String()
|
||||
case `dns:"nsec"`:
|
||||
if d.Len() == 0 {
|
||||
return ""
|
||||
}
|
||||
s := Type(d.Index(0).Uint()).String()
|
||||
for i := 1; i < d.Len(); i++ {
|
||||
s += " " + Type(d.Index(i).Uint()).String()
|
||||
}
|
||||
return s
|
||||
default:
|
||||
// if it does not have a tag it's a string slice
|
||||
fallthrough
|
||||
case `dns:"txt"`:
|
||||
if d.Len() == 0 {
|
||||
return ""
|
||||
}
|
||||
s := d.Index(0).String()
|
||||
for i := 1; i < d.Len(); i++ {
|
||||
s += " " + d.Index(i).String()
|
||||
}
|
||||
return s
|
||||
}
|
||||
}
|
||||
return ""
|
||||
}
|
||||
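A tiny sketch of NumField and Field on an MX record (the record data is illustrative):

package main

import (
	"fmt"

	"github.com/miekg/dns"
)

func main() {
	mx, err := dns.NewRR("miek.nl. 3600 IN MX 10 mx.miek.nl.")
	if err != nil {
		panic(err)
	}
	fmt.Println(dns.NumField(mx)) // 2: Preference and Mx
	fmt.Println(dns.Field(mx, 1)) // "10"
	fmt.Println(dns.Field(mx, 2)) // "mx.miek.nl."
}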
23
vendor/github.com/miekg/dns/fuzz.go
generated
vendored
Normal file
@ -0,0 +1,23 @@
|
||||
// +build fuzz
|
||||
|
||||
package dns
|
||||
|
||||
func Fuzz(data []byte) int {
|
||||
msg := new(Msg)
|
||||
|
||||
if err := msg.Unpack(data); err != nil {
|
||||
return 0
|
||||
}
|
||||
if _, err := msg.Pack(); err != nil {
|
||||
return 0
|
||||
}
|
||||
|
||||
return 1
|
||||
}
|
||||
|
||||
func FuzzNewRR(data []byte) int {
|
||||
if _, err := NewRR(string(data)); err != nil {
|
||||
return 0
|
||||
}
|
||||
return 1
|
||||
}
|
||||
159
vendor/github.com/miekg/dns/generate.go
generated
vendored
Normal file
@ -0,0 +1,159 @@
|
||||
package dns
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"errors"
|
||||
"fmt"
|
||||
"strconv"
|
||||
"strings"
|
||||
)
|
||||
|
||||
// Parse the $GENERATE statement as used in BIND9 zones.
|
||||
// See http://www.zytrax.com/books/dns/ch8/generate.html for instance.
|
||||
// We are called after '$GENERATE '. After which we expect:
|
||||
// * the range (12-24/2)
|
||||
// * lhs (ownername)
|
||||
// * [[ttl][class]]
|
||||
// * type
|
||||
// * rhs (rdata)
|
||||
// But we are lazy here; only the range is parsed, *all* occurrences
|
||||
// of $ after that are interpreted.
|
||||
// Any errors are returned as a string value; the empty string signals
|
||||
// "no error".
|
||||
func generate(l lex, c chan lex, t chan *Token, o string) string {
|
||||
step := 1
|
||||
if i := strings.IndexAny(l.token, "/"); i != -1 {
|
||||
if i+1 == len(l.token) {
|
||||
return "bad step in $GENERATE range"
|
||||
}
|
||||
if s, err := strconv.Atoi(l.token[i+1:]); err == nil {
|
||||
if s < 0 {
|
||||
return "bad step in $GENERATE range"
|
||||
}
|
||||
step = s
|
||||
} else {
|
||||
return "bad step in $GENERATE range"
|
||||
}
|
||||
l.token = l.token[:i]
|
||||
}
|
||||
sx := strings.SplitN(l.token, "-", 2)
|
||||
if len(sx) != 2 {
|
||||
return "bad start-stop in $GENERATE range"
|
||||
}
|
||||
start, err := strconv.Atoi(sx[0])
|
||||
if err != nil {
|
||||
return "bad start in $GENERATE range"
|
||||
}
|
||||
end, err := strconv.Atoi(sx[1])
|
||||
if err != nil {
|
||||
return "bad stop in $GENERATE range"
|
||||
}
|
||||
if end < 0 || start < 0 || end < start {
|
||||
return "bad range in $GENERATE range"
|
||||
}
|
||||
|
||||
<-c // _BLANK
|
||||
// Create a complete new string, which we then parse again.
|
||||
s := ""
|
||||
BuildRR:
|
||||
l = <-c
|
||||
if l.value != zNewline && l.value != zEOF {
|
||||
s += l.token
|
||||
goto BuildRR
|
||||
}
|
||||
for i := start; i <= end; i += step {
|
||||
var (
|
||||
escape bool
|
||||
dom bytes.Buffer
|
||||
mod string
|
||||
err error
|
||||
offset int
|
||||
)
|
||||
|
||||
for j := 0; j < len(s); j++ { // No 'range' because we need to jump around
|
||||
switch s[j] {
|
||||
case '\\':
|
||||
if escape {
|
||||
dom.WriteByte('\\')
|
||||
escape = false
|
||||
continue
|
||||
}
|
||||
escape = true
|
||||
case '$':
|
||||
mod = "%d"
|
||||
offset = 0
|
||||
if escape {
|
||||
dom.WriteByte('$')
|
||||
escape = false
|
||||
continue
|
||||
}
|
||||
escape = false
|
||||
if j+1 >= len(s) { // End of the string
|
||||
dom.WriteString(fmt.Sprintf(mod, i+offset))
|
||||
continue
|
||||
} else {
|
||||
if s[j+1] == '$' {
|
||||
dom.WriteByte('$')
|
||||
j++
|
||||
continue
|
||||
}
|
||||
}
|
||||
// Search for { and }
|
||||
if s[j+1] == '{' { // Modifier block
|
||||
sep := strings.Index(s[j+2:], "}")
|
||||
if sep == -1 {
|
||||
return "bad modifier in $GENERATE"
|
||||
}
|
||||
mod, offset, err = modToPrintf(s[j+2 : j+2+sep])
|
||||
if err != nil {
|
||||
return err.Error()
|
||||
}
|
||||
j += 2 + sep // Jump to it
|
||||
}
|
||||
dom.WriteString(fmt.Sprintf(mod, i+offset))
|
||||
default:
|
||||
if escape { // Pretty useless here
|
||||
escape = false
|
||||
continue
|
||||
}
|
||||
dom.WriteByte(s[j])
|
||||
}
|
||||
}
|
||||
// Re-parse the RR and send it on the current channel t
|
||||
rx, err := NewRR("$ORIGIN " + o + "\n" + dom.String())
|
||||
if err != nil {
|
||||
return err.Error()
|
||||
}
|
||||
t <- &Token{RR: rx}
|
||||
// It's more efficient to first build the rrlist and then parse it in
|
||||
// one go! But is this a problem?
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
// Convert a $GENERATE modifier 0,0,d to something Printf can deal with.
|
||||
func modToPrintf(s string) (string, int, error) {
|
||||
xs := strings.SplitN(s, ",", 3)
|
||||
if len(xs) != 3 {
|
||||
return "", 0, errors.New("bad modifier in $GENERATE")
|
||||
}
|
||||
// xs[0] is offset, xs[1] is width, xs[2] is base
|
||||
if xs[2] != "o" && xs[2] != "d" && xs[2] != "x" && xs[2] != "X" {
|
||||
return "", 0, errors.New("bad base in $GENERATE")
|
||||
}
|
||||
offset, err := strconv.Atoi(xs[0])
|
||||
if err != nil || offset > 255 {
|
||||
return "", 0, errors.New("bad offset in $GENERATE")
|
||||
}
|
||||
width, err := strconv.Atoi(xs[1])
|
||||
if err != nil || width > 255 {
|
||||
return "", offset, errors.New("bad width in $GENERATE")
|
||||
}
|
||||
switch {
|
||||
case width < 0:
|
||||
return "", offset, errors.New("bad width in $GENERATE")
|
||||
case width == 0:
|
||||
return "%" + xs[1] + xs[2], offset, nil
|
||||
}
|
||||
return "%0" + xs[1] + xs[2], offset, nil
|
||||
}
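// Illustrative sketch, not part of the vendored file: how the $GENERATE loop and
// modToPrintf above interact. The directive
//
//	$GENERATE 10-12 host-${0,3,d} A 192.0.2.$
//
// expands to host-010, host-011 and host-012 pointing at 192.0.2.10 through .12.
// exampleGenerateModifier is a hypothetical helper showing only the substitution step.
func exampleGenerateModifier(i int) string {
	mod, offset, err := modToPrintf("0,3,d") // mod == "%03d", offset == 0
	if err != nil {
		return ""
	}
	return fmt.Sprintf(mod, i+offset) // for i == 10 this yields "010"
}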
|
||||
vendor/github.com/miekg/dns/labels.go (generated, vendored, new file, 191 lines)
@@ -0,0 +1,191 @@
|
||||
package dns
|
||||
|
||||
// Holds a bunch of helper functions for dealing with labels.
|
||||
|
||||
// SplitDomainName splits a name string into its labels.
|
||||
// www.miek.nl. returns []string{"www", "miek", "nl"}
|
||||
// .www.miek.nl. returns []string{"", "www", "miek", "nl"},
|
||||
// The root label (.) returns nil. Note that using
|
||||
// strings.Split(s, ".") will work in most cases, but does not handle
|
||||
// escaped dots (\.) for instance.
|
||||
// s must be a syntactically valid domain name, see IsDomainName.
|
||||
func SplitDomainName(s string) (labels []string) {
|
||||
if len(s) == 0 {
|
||||
return nil
|
||||
}
|
||||
fqdnEnd := 0 // offset of the final '.' or the length of the name
|
||||
idx := Split(s)
|
||||
begin := 0
|
||||
if s[len(s)-1] == '.' {
|
||||
fqdnEnd = len(s) - 1
|
||||
} else {
|
||||
fqdnEnd = len(s)
|
||||
}
|
||||
|
||||
switch len(idx) {
|
||||
case 0:
|
||||
return nil
|
||||
case 1:
|
||||
// no-op
|
||||
default:
|
||||
end := 0
|
||||
for i := 1; i < len(idx); i++ {
|
||||
end = idx[i]
|
||||
labels = append(labels, s[begin:end-1])
|
||||
begin = end
|
||||
}
|
||||
}
|
||||
|
||||
labels = append(labels, s[begin:fqdnEnd])
|
||||
return labels
|
||||
}
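// Sketch, not part of the vendored file: SplitDomainName keeps escaped dots inside
// a label, which a plain strings.Split on "." would break apart.
func exampleSplitDomainName() {
	labels := SplitDomainName(`a\.b.example.com.`)
	// labels == []string{`a\.b`, "example", "com"}; "www.miek.nl." gives [www miek nl].
	_ = labels
}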
|
||||
|
||||
// CompareDomainName compares the names s1 and s2 and
|
||||
// returns how many labels they have in common starting from the *right*.
|
||||
// The comparison stops at the first inequality. The names are downcased
|
||||
// before the comparison.
|
||||
//
|
||||
// www.miek.nl. and miek.nl. have two labels in common: miek and nl
|
||||
// www.miek.nl. and www.bla.nl. have one label in common: nl
|
||||
//
|
||||
// s1 and s2 must be syntactically valid domain names.
|
||||
func CompareDomainName(s1, s2 string) (n int) {
|
||||
// the first check: root label
|
||||
if s1 == "." || s2 == "." {
|
||||
return 0
|
||||
}
|
||||
|
||||
l1 := Split(s1)
|
||||
l2 := Split(s2)
|
||||
|
||||
j1 := len(l1) - 1 // end
|
||||
i1 := len(l1) - 2 // start
|
||||
j2 := len(l2) - 1
|
||||
i2 := len(l2) - 2
|
||||
// the second check can be done here: last/only label
|
||||
// before we fall through into the for-loop below
|
||||
if equal(s1[l1[j1]:], s2[l2[j2]:]) {
|
||||
n++
|
||||
} else {
|
||||
return
|
||||
}
|
||||
for {
|
||||
if i1 < 0 || i2 < 0 {
|
||||
break
|
||||
}
|
||||
if equal(s1[l1[i1]:l1[j1]], s2[l2[i2]:l2[j2]]) {
|
||||
n++
|
||||
} else {
|
||||
break
|
||||
}
|
||||
j1--
|
||||
i1--
|
||||
j2--
|
||||
i2--
|
||||
}
|
||||
return
|
||||
}
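// Sketch, not part of the vendored file: the count starts at the right-most label.
func exampleCompareDomainName() {
	n1 := CompareDomainName("www.miek.nl.", "miek.nl.")    // 2: "miek" and "nl" match
	n2 := CompareDomainName("www.miek.nl.", "www.bla.nl.") // 1: only "nl" matches
	_, _ = n1, n2
}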
|
||||
|
||||
// CountLabel counts the number of labels in the string s.
|
||||
// s must be a syntactically valid domain name.
|
||||
func CountLabel(s string) (labels int) {
|
||||
if s == "." {
|
||||
return
|
||||
}
|
||||
off := 0
|
||||
end := false
|
||||
for {
|
||||
off, end = NextLabel(s, off)
|
||||
labels++
|
||||
if end {
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Split splits a name s into its label indexes.
|
||||
// www.miek.nl. returns []int{0, 4, 9}, www.miek.nl also returns []int{0, 4, 9}.
|
||||
// The root name (.) returns nil. Also see SplitDomainName.
|
||||
// s must be a syntactically valid domain name.
|
||||
func Split(s string) []int {
|
||||
if s == "." {
|
||||
return nil
|
||||
}
|
||||
idx := make([]int, 1, 3)
|
||||
off := 0
|
||||
end := false
|
||||
|
||||
for {
|
||||
off, end = NextLabel(s, off)
|
||||
if end {
|
||||
return idx
|
||||
}
|
||||
idx = append(idx, off)
|
||||
}
|
||||
}
|
||||
|
||||
// NextLabel returns the index of the start of the next label in the
|
||||
// string s starting at offset.
|
||||
// The bool end is true when the end of the string has been reached.
|
||||
// Also see PrevLabel.
|
||||
func NextLabel(s string, offset int) (i int, end bool) {
|
||||
quote := false
|
||||
for i = offset; i < len(s)-1; i++ {
|
||||
switch s[i] {
|
||||
case '\\':
|
||||
quote = !quote
|
||||
default:
|
||||
quote = false
|
||||
case '.':
|
||||
if quote {
|
||||
quote = !quote
|
||||
continue
|
||||
}
|
||||
return i + 1, false
|
||||
}
|
||||
}
|
||||
return i + 1, true
|
||||
}
|
||||
|
||||
// PrevLabel returns the index of the label when starting from the right and
|
||||
// jumping n labels to the left.
|
||||
// The bool start is true when the start of the string has been overshot.
|
||||
// Also see NextLabel.
|
||||
func PrevLabel(s string, n int) (i int, start bool) {
|
||||
if n == 0 {
|
||||
return len(s), false
|
||||
}
|
||||
lab := Split(s)
|
||||
if lab == nil {
|
||||
return 0, true
|
||||
}
|
||||
if n > len(lab) {
|
||||
return 0, true
|
||||
}
|
||||
return lab[len(lab)-n], false
|
||||
}
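// Sketch, not part of the vendored file: the index helpers all report byte offsets
// of label starts within the unmodified name string.
func exampleLabelIndexes() {
	idx := Split("www.miek.nl.") // []int{0, 4, 9}
	off, end := NextLabel("www.miek.nl.", 0)
	// off == 4 (start of "miek.nl."), end == false
	prev, _ := PrevLabel("www.miek.nl.", 1)
	// prev == 9, the start of the right-most label "nl."
	_, _, _, _ = idx, off, end, prev
}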
|
||||
|
||||
// equal compares a and b while ignoring case. It returns true when equal otherwise false.
|
||||
func equal(a, b string) bool {
|
||||
// might be lifted into API function.
|
||||
la := len(a)
|
||||
lb := len(b)
|
||||
if la != lb {
|
||||
return false
|
||||
}
|
||||
|
||||
for i := la - 1; i >= 0; i-- {
|
||||
ai := a[i]
|
||||
bi := b[i]
|
||||
if ai >= 'A' && ai <= 'Z' {
|
||||
ai |= ('a' - 'A')
|
||||
}
|
||||
if bi >= 'A' && bi <= 'Z' {
|
||||
bi |= ('a' - 'A')
|
||||
}
|
||||
if ai != bi {
|
||||
return false
|
||||
}
|
||||
}
|
||||
return true
|
||||
}
|
||||
vendor/github.com/miekg/dns/msg.go (generated, vendored, new file, 1154 lines)
File diff suppressed because it is too large.
vendor/github.com/miekg/dns/msg_generate.go (generated, vendored, new file, 348 lines)
@@ -0,0 +1,348 @@
|
||||
//+build ignore
|
||||
|
||||
// msg_generate.go is meant to run with go generate. It will use
|
||||
// go/{importer,types} to track down all the RR struct types. Then for each type
|
||||
// it will generate pack/unpack methods based on the struct tags. The generated source is
|
||||
// written to zmsg.go, and is meant to be checked into git.
|
||||
package main
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"go/format"
|
||||
"go/importer"
|
||||
"go/types"
|
||||
"log"
|
||||
"os"
|
||||
"strings"
|
||||
)
|
||||
|
||||
var packageHdr = `
|
||||
// Code generated by "go run msg_generate.go"; DO NOT EDIT.
|
||||
|
||||
package dns
|
||||
|
||||
`
|
||||
|
||||
// getTypeStruct will take a type and the package scope, and return the
|
||||
// (innermost) struct if the type is considered an RR type (currently defined as
|
||||
// those structs beginning with a RR_Header, could be redefined as implementing
|
||||
// the RR interface). The bool return value indicates if embedded structs were
|
||||
// resolved.
|
||||
func getTypeStruct(t types.Type, scope *types.Scope) (*types.Struct, bool) {
|
||||
st, ok := t.Underlying().(*types.Struct)
|
||||
if !ok {
|
||||
return nil, false
|
||||
}
|
||||
if st.Field(0).Type() == scope.Lookup("RR_Header").Type() {
|
||||
return st, false
|
||||
}
|
||||
if st.Field(0).Anonymous() {
|
||||
st, _ := getTypeStruct(st.Field(0).Type(), scope)
|
||||
return st, true
|
||||
}
|
||||
return nil, false
|
||||
}
|
||||
|
||||
func main() {
|
||||
// Import and type-check the package
|
||||
pkg, err := importer.Default().Import("github.com/miekg/dns")
|
||||
fatalIfErr(err)
|
||||
scope := pkg.Scope()
|
||||
|
||||
// Collect actual types (*X)
|
||||
var namedTypes []string
|
||||
for _, name := range scope.Names() {
|
||||
o := scope.Lookup(name)
|
||||
if o == nil || !o.Exported() {
|
||||
continue
|
||||
}
|
||||
if st, _ := getTypeStruct(o.Type(), scope); st == nil {
|
||||
continue
|
||||
}
|
||||
if name == "PrivateRR" {
|
||||
continue
|
||||
}
|
||||
|
||||
// Check if corresponding TypeX exists
|
||||
if scope.Lookup("Type"+o.Name()) == nil && o.Name() != "RFC3597" {
|
||||
log.Fatalf("Constant Type%s does not exist.", o.Name())
|
||||
}
|
||||
|
||||
namedTypes = append(namedTypes, o.Name())
|
||||
}
|
||||
|
||||
b := &bytes.Buffer{}
|
||||
b.WriteString(packageHdr)
|
||||
|
||||
fmt.Fprint(b, "// pack*() functions\n\n")
|
||||
for _, name := range namedTypes {
|
||||
o := scope.Lookup(name)
|
||||
st, _ := getTypeStruct(o.Type(), scope)
|
||||
|
||||
fmt.Fprintf(b, "func (rr *%s) pack(msg []byte, off int, compression map[string]int, compress bool) (int, error) {\n", name)
|
||||
fmt.Fprint(b, `off, err := rr.Hdr.pack(msg, off, compression, compress)
|
||||
if err != nil {
|
||||
return off, err
|
||||
}
|
||||
headerEnd := off
|
||||
`)
|
||||
for i := 1; i < st.NumFields(); i++ {
|
||||
o := func(s string) {
|
||||
fmt.Fprintf(b, s, st.Field(i).Name())
|
||||
fmt.Fprint(b, `if err != nil {
|
||||
return off, err
|
||||
}
|
||||
`)
|
||||
}
|
||||
|
||||
if _, ok := st.Field(i).Type().(*types.Slice); ok {
|
||||
switch st.Tag(i) {
|
||||
case `dns:"-"`: // ignored
|
||||
case `dns:"txt"`:
|
||||
o("off, err = packStringTxt(rr.%s, msg, off)\n")
|
||||
case `dns:"opt"`:
|
||||
o("off, err = packDataOpt(rr.%s, msg, off)\n")
|
||||
case `dns:"nsec"`:
|
||||
o("off, err = packDataNsec(rr.%s, msg, off)\n")
|
||||
case `dns:"domain-name"`:
|
||||
o("off, err = packDataDomainNames(rr.%s, msg, off, compression, compress)\n")
|
||||
default:
|
||||
log.Fatalln(name, st.Field(i).Name(), st.Tag(i))
|
||||
}
|
||||
continue
|
||||
}
|
||||
|
||||
switch {
|
||||
case st.Tag(i) == `dns:"-"`: // ignored
|
||||
case st.Tag(i) == `dns:"cdomain-name"`:
|
||||
o("off, err = PackDomainName(rr.%s, msg, off, compression, compress)\n")
|
||||
case st.Tag(i) == `dns:"domain-name"`:
|
||||
o("off, err = PackDomainName(rr.%s, msg, off, compression, false)\n")
|
||||
case st.Tag(i) == `dns:"a"`:
|
||||
o("off, err = packDataA(rr.%s, msg, off)\n")
|
||||
case st.Tag(i) == `dns:"aaaa"`:
|
||||
o("off, err = packDataAAAA(rr.%s, msg, off)\n")
|
||||
case st.Tag(i) == `dns:"uint48"`:
|
||||
o("off, err = packUint48(rr.%s, msg, off)\n")
|
||||
case st.Tag(i) == `dns:"txt"`:
|
||||
o("off, err = packString(rr.%s, msg, off)\n")
|
||||
|
||||
case strings.HasPrefix(st.Tag(i), `dns:"size-base32`): // size-base32 can be packed just like base32
|
||||
fallthrough
|
||||
case st.Tag(i) == `dns:"base32"`:
|
||||
o("off, err = packStringBase32(rr.%s, msg, off)\n")
|
||||
|
||||
case strings.HasPrefix(st.Tag(i), `dns:"size-base64`): // size-base64 can be packed just like base64
|
||||
fallthrough
|
||||
case st.Tag(i) == `dns:"base64"`:
|
||||
o("off, err = packStringBase64(rr.%s, msg, off)\n")
|
||||
|
||||
case strings.HasPrefix(st.Tag(i), `dns:"size-hex:SaltLength`):
|
||||
// directly write instead of using o() so we get the error check in the correct place
|
||||
field := st.Field(i).Name()
|
||||
fmt.Fprintf(b, `// Only pack salt if value is not "-", i.e. empty
|
||||
if rr.%s != "-" {
|
||||
off, err = packStringHex(rr.%s, msg, off)
|
||||
if err != nil {
|
||||
return off, err
|
||||
}
|
||||
}
|
||||
`, field, field)
|
||||
continue
|
||||
case strings.HasPrefix(st.Tag(i), `dns:"size-hex`): // size-hex can be packed just like hex
|
||||
fallthrough
|
||||
case st.Tag(i) == `dns:"hex"`:
|
||||
o("off, err = packStringHex(rr.%s, msg, off)\n")
|
||||
|
||||
case st.Tag(i) == `dns:"octet"`:
|
||||
o("off, err = packStringOctet(rr.%s, msg, off)\n")
|
||||
case st.Tag(i) == "":
|
||||
switch st.Field(i).Type().(*types.Basic).Kind() {
|
||||
case types.Uint8:
|
||||
o("off, err = packUint8(rr.%s, msg, off)\n")
|
||||
case types.Uint16:
|
||||
o("off, err = packUint16(rr.%s, msg, off)\n")
|
||||
case types.Uint32:
|
||||
o("off, err = packUint32(rr.%s, msg, off)\n")
|
||||
case types.Uint64:
|
||||
o("off, err = packUint64(rr.%s, msg, off)\n")
|
||||
case types.String:
|
||||
o("off, err = packString(rr.%s, msg, off)\n")
|
||||
default:
|
||||
log.Fatalln(name, st.Field(i).Name())
|
||||
}
|
||||
default:
|
||||
log.Fatalln(name, st.Field(i).Name(), st.Tag(i))
|
||||
}
|
||||
}
|
||||
// We have packed everything, only now we know the rdlength of this RR
|
||||
fmt.Fprintln(b, "rr.Header().Rdlength = uint16(off-headerEnd)")
|
||||
fmt.Fprintln(b, "return off, nil }\n")
|
||||
}
|
||||
|
||||
fmt.Fprint(b, "// unpack*() functions\n\n")
|
||||
for _, name := range namedTypes {
|
||||
o := scope.Lookup(name)
|
||||
st, _ := getTypeStruct(o.Type(), scope)
|
||||
|
||||
fmt.Fprintf(b, "func unpack%s(h RR_Header, msg []byte, off int) (RR, int, error) {\n", name)
|
||||
fmt.Fprintf(b, "rr := new(%s)\n", name)
|
||||
fmt.Fprint(b, "rr.Hdr = h\n")
|
||||
fmt.Fprint(b, `if noRdata(h) {
|
||||
return rr, off, nil
|
||||
}
|
||||
var err error
|
||||
rdStart := off
|
||||
_ = rdStart
|
||||
|
||||
`)
|
||||
for i := 1; i < st.NumFields(); i++ {
|
||||
o := func(s string) {
|
||||
fmt.Fprintf(b, s, st.Field(i).Name())
|
||||
fmt.Fprint(b, `if err != nil {
|
||||
return rr, off, err
|
||||
}
|
||||
`)
|
||||
}
|
||||
|
||||
// size-* are special, because they reference a struct member we should use for the length.
|
||||
if strings.HasPrefix(st.Tag(i), `dns:"size-`) {
|
||||
structMember := structMember(st.Tag(i))
|
||||
structTag := structTag(st.Tag(i))
|
||||
switch structTag {
|
||||
case "hex":
|
||||
fmt.Fprintf(b, "rr.%s, off, err = unpackStringHex(msg, off, off + int(rr.%s))\n", st.Field(i).Name(), structMember)
|
||||
case "base32":
|
||||
fmt.Fprintf(b, "rr.%s, off, err = unpackStringBase32(msg, off, off + int(rr.%s))\n", st.Field(i).Name(), structMember)
|
||||
case "base64":
|
||||
fmt.Fprintf(b, "rr.%s, off, err = unpackStringBase64(msg, off, off + int(rr.%s))\n", st.Field(i).Name(), structMember)
|
||||
default:
|
||||
log.Fatalln(name, st.Field(i).Name(), st.Tag(i))
|
||||
}
|
||||
fmt.Fprint(b, `if err != nil {
|
||||
return rr, off, err
|
||||
}
|
||||
`)
|
||||
continue
|
||||
}
|
||||
|
||||
if _, ok := st.Field(i).Type().(*types.Slice); ok {
|
||||
switch st.Tag(i) {
|
||||
case `dns:"-"`: // ignored
|
||||
case `dns:"txt"`:
|
||||
o("rr.%s, off, err = unpackStringTxt(msg, off)\n")
|
||||
case `dns:"opt"`:
|
||||
o("rr.%s, off, err = unpackDataOpt(msg, off)\n")
|
||||
case `dns:"nsec"`:
|
||||
o("rr.%s, off, err = unpackDataNsec(msg, off)\n")
|
||||
case `dns:"domain-name"`:
|
||||
o("rr.%s, off, err = unpackDataDomainNames(msg, off, rdStart + int(rr.Hdr.Rdlength))\n")
|
||||
default:
|
||||
log.Fatalln(name, st.Field(i).Name(), st.Tag(i))
|
||||
}
|
||||
continue
|
||||
}
|
||||
|
||||
switch st.Tag(i) {
|
||||
case `dns:"-"`: // ignored
|
||||
case `dns:"cdomain-name"`:
|
||||
fallthrough
|
||||
case `dns:"domain-name"`:
|
||||
o("rr.%s, off, err = UnpackDomainName(msg, off)\n")
|
||||
case `dns:"a"`:
|
||||
o("rr.%s, off, err = unpackDataA(msg, off)\n")
|
||||
case `dns:"aaaa"`:
|
||||
o("rr.%s, off, err = unpackDataAAAA(msg, off)\n")
|
||||
case `dns:"uint48"`:
|
||||
o("rr.%s, off, err = unpackUint48(msg, off)\n")
|
||||
case `dns:"txt"`:
|
||||
o("rr.%s, off, err = unpackString(msg, off)\n")
|
||||
case `dns:"base32"`:
|
||||
o("rr.%s, off, err = unpackStringBase32(msg, off, rdStart + int(rr.Hdr.Rdlength))\n")
|
||||
case `dns:"base64"`:
|
||||
o("rr.%s, off, err = unpackStringBase64(msg, off, rdStart + int(rr.Hdr.Rdlength))\n")
|
||||
case `dns:"hex"`:
|
||||
o("rr.%s, off, err = unpackStringHex(msg, off, rdStart + int(rr.Hdr.Rdlength))\n")
|
||||
case `dns:"octet"`:
|
||||
o("rr.%s, off, err = unpackStringOctet(msg, off)\n")
|
||||
case "":
|
||||
switch st.Field(i).Type().(*types.Basic).Kind() {
|
||||
case types.Uint8:
|
||||
o("rr.%s, off, err = unpackUint8(msg, off)\n")
|
||||
case types.Uint16:
|
||||
o("rr.%s, off, err = unpackUint16(msg, off)\n")
|
||||
case types.Uint32:
|
||||
o("rr.%s, off, err = unpackUint32(msg, off)\n")
|
||||
case types.Uint64:
|
||||
o("rr.%s, off, err = unpackUint64(msg, off)\n")
|
||||
case types.String:
|
||||
o("rr.%s, off, err = unpackString(msg, off)\n")
|
||||
default:
|
||||
log.Fatalln(name, st.Field(i).Name())
|
||||
}
|
||||
default:
|
||||
log.Fatalln(name, st.Field(i).Name(), st.Tag(i))
|
||||
}
|
||||
// If we've hit len(msg) we return without error.
|
||||
if i < st.NumFields()-1 {
|
||||
fmt.Fprintf(b, `if off == len(msg) {
|
||||
return rr, off, nil
|
||||
}
|
||||
`)
|
||||
}
|
||||
}
|
||||
fmt.Fprintf(b, "return rr, off, err }\n\n")
|
||||
}
|
||||
// Generate typeToUnpack map
|
||||
fmt.Fprintln(b, "var typeToUnpack = map[uint16]func(RR_Header, []byte, int) (RR, int, error){")
|
||||
for _, name := range namedTypes {
|
||||
if name == "RFC3597" {
|
||||
continue
|
||||
}
|
||||
fmt.Fprintf(b, "Type%s: unpack%s,\n", name, name)
|
||||
}
|
||||
fmt.Fprintln(b, "}\n")
|
||||
|
||||
// gofmt
|
||||
res, err := format.Source(b.Bytes())
|
||||
if err != nil {
|
||||
b.WriteTo(os.Stderr)
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
// write result
|
||||
f, err := os.Create("zmsg.go")
|
||||
fatalIfErr(err)
|
||||
defer f.Close()
|
||||
f.Write(res)
|
||||
}
|
||||
|
||||
// structMember will take a tag like dns:"size-base32:SaltLength" and return the last part of this string.
|
||||
func structMember(s string) string {
|
||||
fields := strings.Split(s, ":")
|
||||
if len(fields) == 0 {
|
||||
return ""
|
||||
}
|
||||
f := fields[len(fields)-1]
|
||||
// f should have a closing "
|
||||
if len(f) > 1 {
|
||||
return f[:len(f)-1]
|
||||
}
|
||||
return f
|
||||
}
|
||||
|
||||
// structTag will take a tag like dns:"size-base32:SaltLength" and return base32.
|
||||
func structTag(s string) string {
|
||||
fields := strings.Split(s, ":")
|
||||
if len(fields) < 2 {
|
||||
return ""
|
||||
}
|
||||
return fields[1][len("\"size-"):]
|
||||
}
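// Sketch, not part of the vendored file: how the two tag helpers above split a
// size-prefixed struct tag as returned by go/types' st.Tag(i).
func exampleSizeTag() {
	member := structMember(`dns:"size-base32:SaltLength"`) // "SaltLength"
	kind := structTag(`dns:"size-base32:SaltLength"`)      // "base32"
	_, _ = member, kind
}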
|
||||
|
||||
func fatalIfErr(err error) {
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
}
|
||||
vendor/github.com/miekg/dns/msg_helpers.go (generated, vendored, new file, 637 lines)
@@ -0,0 +1,637 @@
|
||||
package dns
|
||||
|
||||
import (
|
||||
"encoding/base32"
|
||||
"encoding/base64"
|
||||
"encoding/binary"
|
||||
"encoding/hex"
|
||||
"net"
|
||||
"strconv"
|
||||
)
|
||||
|
||||
// helper functions called from the generated zmsg.go
|
||||
|
||||
// These functions are named after the tag they help pack/unpack; if there is no tag it is the name
|
||||
// of the type they pack/unpack (string, int, etc). We prefix all with unpackData or packData, so packDataA or
|
||||
// packDataDomainName.
|
||||
|
||||
func unpackDataA(msg []byte, off int) (net.IP, int, error) {
|
||||
if off+net.IPv4len > len(msg) {
|
||||
return nil, len(msg), &Error{err: "overflow unpacking a"}
|
||||
}
|
||||
a := append(make(net.IP, 0, net.IPv4len), msg[off:off+net.IPv4len]...)
|
||||
off += net.IPv4len
|
||||
return a, off, nil
|
||||
}
|
||||
|
||||
func packDataA(a net.IP, msg []byte, off int) (int, error) {
|
||||
// It must be a 4-byte address; a 16-byte IP is accepted too, but To4 reduces it and only 4 bytes are encoded
|
||||
if off+net.IPv4len > len(msg) {
|
||||
return len(msg), &Error{err: "overflow packing a"}
|
||||
}
|
||||
switch len(a) {
|
||||
case net.IPv4len, net.IPv6len:
|
||||
copy(msg[off:], a.To4())
|
||||
off += net.IPv4len
|
||||
case 0:
|
||||
// Allowed, for dynamic updates.
|
||||
default:
|
||||
return len(msg), &Error{err: "overflow packing a"}
|
||||
}
|
||||
return off, nil
|
||||
}
|
||||
|
||||
func unpackDataAAAA(msg []byte, off int) (net.IP, int, error) {
|
||||
if off+net.IPv6len > len(msg) {
|
||||
return nil, len(msg), &Error{err: "overflow unpacking aaaa"}
|
||||
}
|
||||
aaaa := append(make(net.IP, 0, net.IPv6len), msg[off:off+net.IPv6len]...)
|
||||
off += net.IPv6len
|
||||
return aaaa, off, nil
|
||||
}
|
||||
|
||||
func packDataAAAA(aaaa net.IP, msg []byte, off int) (int, error) {
|
||||
if off+net.IPv6len > len(msg) {
|
||||
return len(msg), &Error{err: "overflow packing aaaa"}
|
||||
}
|
||||
|
||||
switch len(aaaa) {
|
||||
case net.IPv6len:
|
||||
copy(msg[off:], aaaa)
|
||||
off += net.IPv6len
|
||||
case 0:
|
||||
// Allowed, dynamic updates.
|
||||
default:
|
||||
return len(msg), &Error{err: "overflow packing aaaa"}
|
||||
}
|
||||
return off, nil
|
||||
}
|
||||
|
||||
// unpackHeader unpacks an RR header, returning the offset to the end of the header and a
|
||||
// re-sliced msg according to the expected length of the RR.
|
||||
func unpackHeader(msg []byte, off int) (rr RR_Header, off1 int, truncmsg []byte, err error) {
|
||||
hdr := RR_Header{}
|
||||
if off == len(msg) {
|
||||
return hdr, off, msg, nil
|
||||
}
|
||||
|
||||
hdr.Name, off, err = UnpackDomainName(msg, off)
|
||||
if err != nil {
|
||||
return hdr, len(msg), msg, err
|
||||
}
|
||||
hdr.Rrtype, off, err = unpackUint16(msg, off)
|
||||
if err != nil {
|
||||
return hdr, len(msg), msg, err
|
||||
}
|
||||
hdr.Class, off, err = unpackUint16(msg, off)
|
||||
if err != nil {
|
||||
return hdr, len(msg), msg, err
|
||||
}
|
||||
hdr.Ttl, off, err = unpackUint32(msg, off)
|
||||
if err != nil {
|
||||
return hdr, len(msg), msg, err
|
||||
}
|
||||
hdr.Rdlength, off, err = unpackUint16(msg, off)
|
||||
if err != nil {
|
||||
return hdr, len(msg), msg, err
|
||||
}
|
||||
msg, err = truncateMsgFromRdlength(msg, off, hdr.Rdlength)
|
||||
return hdr, off, msg, err
|
||||
}
|
||||
|
||||
// pack packs an RR header, returning the offset to the end of the header.
|
||||
// See PackDomainName for documentation about the compression.
|
||||
func (hdr RR_Header) pack(msg []byte, off int, compression map[string]int, compress bool) (off1 int, err error) {
|
||||
if off == len(msg) {
|
||||
return off, nil
|
||||
}
|
||||
|
||||
off, err = PackDomainName(hdr.Name, msg, off, compression, compress)
|
||||
if err != nil {
|
||||
return len(msg), err
|
||||
}
|
||||
off, err = packUint16(hdr.Rrtype, msg, off)
|
||||
if err != nil {
|
||||
return len(msg), err
|
||||
}
|
||||
off, err = packUint16(hdr.Class, msg, off)
|
||||
if err != nil {
|
||||
return len(msg), err
|
||||
}
|
||||
off, err = packUint32(hdr.Ttl, msg, off)
|
||||
if err != nil {
|
||||
return len(msg), err
|
||||
}
|
||||
off, err = packUint16(hdr.Rdlength, msg, off)
|
||||
if err != nil {
|
||||
return len(msg), err
|
||||
}
|
||||
return off, nil
|
||||
}
|
||||
|
||||
// helper functions.
|
||||
|
||||
// truncateMsgFromRdlength truncates msg to match the expected length of the RR.
|
||||
// Returns an error if msg is smaller than the expected size.
|
||||
func truncateMsgFromRdlength(msg []byte, off int, rdlength uint16) (truncmsg []byte, err error) {
|
||||
lenrd := off + int(rdlength)
|
||||
if lenrd > len(msg) {
|
||||
return msg, &Error{err: "overflowing header size"}
|
||||
}
|
||||
return msg[:lenrd], nil
|
||||
}
|
||||
|
||||
func fromBase32(s []byte) (buf []byte, err error) {
|
||||
for i, b := range s {
|
||||
if b >= 'a' && b <= 'z' {
|
||||
s[i] = b - 32
|
||||
}
|
||||
}
|
||||
buflen := base32.HexEncoding.DecodedLen(len(s))
|
||||
buf = make([]byte, buflen)
|
||||
n, err := base32.HexEncoding.Decode(buf, s)
|
||||
buf = buf[:n]
|
||||
return
|
||||
}
|
||||
|
||||
func toBase32(b []byte) string { return base32.HexEncoding.EncodeToString(b) }
|
||||
|
||||
func fromBase64(s []byte) (buf []byte, err error) {
|
||||
buflen := base64.StdEncoding.DecodedLen(len(s))
|
||||
buf = make([]byte, buflen)
|
||||
n, err := base64.StdEncoding.Decode(buf, s)
|
||||
buf = buf[:n]
|
||||
return
|
||||
}
|
||||
|
||||
func toBase64(b []byte) string { return base64.StdEncoding.EncodeToString(b) }
|
||||
|
||||
// noRdata returns true if the Rdlength is zero, i.e. there is no rdata (as in a dynamic update).
|
||||
func noRdata(h RR_Header) bool { return h.Rdlength == 0 }
|
||||
|
||||
func unpackUint8(msg []byte, off int) (i uint8, off1 int, err error) {
|
||||
if off+1 > len(msg) {
|
||||
return 0, len(msg), &Error{err: "overflow unpacking uint8"}
|
||||
}
|
||||
return uint8(msg[off]), off + 1, nil
|
||||
}
|
||||
|
||||
func packUint8(i uint8, msg []byte, off int) (off1 int, err error) {
|
||||
if off+1 > len(msg) {
|
||||
return len(msg), &Error{err: "overflow packing uint8"}
|
||||
}
|
||||
msg[off] = byte(i)
|
||||
return off + 1, nil
|
||||
}
|
||||
|
||||
func unpackUint16(msg []byte, off int) (i uint16, off1 int, err error) {
|
||||
if off+2 > len(msg) {
|
||||
return 0, len(msg), &Error{err: "overflow unpacking uint16"}
|
||||
}
|
||||
return binary.BigEndian.Uint16(msg[off:]), off + 2, nil
|
||||
}
|
||||
|
||||
func packUint16(i uint16, msg []byte, off int) (off1 int, err error) {
|
||||
if off+2 > len(msg) {
|
||||
return len(msg), &Error{err: "overflow packing uint16"}
|
||||
}
|
||||
binary.BigEndian.PutUint16(msg[off:], i)
|
||||
return off + 2, nil
|
||||
}
|
||||
|
||||
func unpackUint32(msg []byte, off int) (i uint32, off1 int, err error) {
|
||||
if off+4 > len(msg) {
|
||||
return 0, len(msg), &Error{err: "overflow unpacking uint32"}
|
||||
}
|
||||
return binary.BigEndian.Uint32(msg[off:]), off + 4, nil
|
||||
}
|
||||
|
||||
func packUint32(i uint32, msg []byte, off int) (off1 int, err error) {
|
||||
if off+4 > len(msg) {
|
||||
return len(msg), &Error{err: "overflow packing uint32"}
|
||||
}
|
||||
binary.BigEndian.PutUint32(msg[off:], i)
|
||||
return off + 4, nil
|
||||
}
|
||||
|
||||
func unpackUint48(msg []byte, off int) (i uint64, off1 int, err error) {
|
||||
if off+6 > len(msg) {
|
||||
return 0, len(msg), &Error{err: "overflow unpacking uint64 as uint48"}
|
||||
}
|
||||
// Used in TSIG where the last 48 bits are occupied, so for now, assume a uint48 (6 bytes)
|
||||
i = (uint64(uint64(msg[off])<<40 | uint64(msg[off+1])<<32 | uint64(msg[off+2])<<24 | uint64(msg[off+3])<<16 |
|
||||
uint64(msg[off+4])<<8 | uint64(msg[off+5])))
|
||||
off += 6
|
||||
return i, off, nil
|
||||
}
|
||||
|
||||
func packUint48(i uint64, msg []byte, off int) (off1 int, err error) {
|
||||
if off+6 > len(msg) {
|
||||
return len(msg), &Error{err: "overflow packing uint64 as uint48"}
|
||||
}
|
||||
msg[off] = byte(i >> 40)
|
||||
msg[off+1] = byte(i >> 32)
|
||||
msg[off+2] = byte(i >> 24)
|
||||
msg[off+3] = byte(i >> 16)
|
||||
msg[off+4] = byte(i >> 8)
|
||||
msg[off+5] = byte(i)
|
||||
off += 6
|
||||
return off, nil
|
||||
}
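// Sketch, not part of the vendored file: the uint48 helpers store only the six
// low-order bytes, big endian, as used for the TSIG time fields.
func exampleUint48() {
	buf := make([]byte, 6)
	off, _ := packUint48(0x112233445566, buf, 0)
	// off == 6, buf == []byte{0x11, 0x22, 0x33, 0x44, 0x55, 0x66}
	v, _, _ := unpackUint48(buf, 0)
	// v == 0x112233445566
	_, _ = off, v
}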
|
||||
|
||||
func unpackUint64(msg []byte, off int) (i uint64, off1 int, err error) {
|
||||
if off+8 > len(msg) {
|
||||
return 0, len(msg), &Error{err: "overflow unpacking uint64"}
|
||||
}
|
||||
return binary.BigEndian.Uint64(msg[off:]), off + 8, nil
|
||||
}
|
||||
|
||||
func packUint64(i uint64, msg []byte, off int) (off1 int, err error) {
|
||||
if off+8 > len(msg) {
|
||||
return len(msg), &Error{err: "overflow packing uint64"}
|
||||
}
|
||||
binary.BigEndian.PutUint64(msg[off:], i)
|
||||
off += 8
|
||||
return off, nil
|
||||
}
|
||||
|
||||
func unpackString(msg []byte, off int) (string, int, error) {
|
||||
if off+1 > len(msg) {
|
||||
return "", off, &Error{err: "overflow unpacking txt"}
|
||||
}
|
||||
l := int(msg[off])
|
||||
if off+l+1 > len(msg) {
|
||||
return "", off, &Error{err: "overflow unpacking txt"}
|
||||
}
|
||||
s := make([]byte, 0, l)
|
||||
for _, b := range msg[off+1 : off+1+l] {
|
||||
switch b {
|
||||
case '"', '\\':
|
||||
s = append(s, '\\', b)
|
||||
default:
|
||||
if b < 32 || b > 127 { // unprintable
|
||||
var buf [3]byte
|
||||
bufs := strconv.AppendInt(buf[:0], int64(b), 10)
|
||||
s = append(s, '\\')
|
||||
for i := 0; i < 3-len(bufs); i++ {
|
||||
s = append(s, '0')
|
||||
}
|
||||
for _, r := range bufs {
|
||||
s = append(s, r)
|
||||
}
|
||||
} else {
|
||||
s = append(s, b)
|
||||
}
|
||||
}
|
||||
}
|
||||
off += 1 + l
|
||||
return string(s), off, nil
|
||||
}
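// Sketch, not part of the vendored file: unpackString reads one length-prefixed
// character-string and escapes quotes, backslashes and unprintable bytes.
func exampleUnpackString() {
	wire := []byte{3, 'a', '"', 0x07} // length byte, then the raw characters
	s, off, _ := unpackString(wire, 0)
	// s == `a\"\007`, off == 4
	_, _ = s, off
}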
|
||||
|
||||
func packString(s string, msg []byte, off int) (int, error) {
|
||||
txtTmp := make([]byte, 256*4+1)
|
||||
off, err := packTxtString(s, msg, off, txtTmp)
|
||||
if err != nil {
|
||||
return len(msg), err
|
||||
}
|
||||
return off, nil
|
||||
}
|
||||
|
||||
func unpackStringBase32(msg []byte, off, end int) (string, int, error) {
|
||||
if end > len(msg) {
|
||||
return "", len(msg), &Error{err: "overflow unpacking base32"}
|
||||
}
|
||||
s := toBase32(msg[off:end])
|
||||
return s, end, nil
|
||||
}
|
||||
|
||||
func packStringBase32(s string, msg []byte, off int) (int, error) {
|
||||
b32, err := fromBase32([]byte(s))
|
||||
if err != nil {
|
||||
return len(msg), err
|
||||
}
|
||||
if off+len(b32) > len(msg) {
|
||||
return len(msg), &Error{err: "overflow packing base32"}
|
||||
}
|
||||
copy(msg[off:off+len(b32)], b32)
|
||||
off += len(b32)
|
||||
return off, nil
|
||||
}
|
||||
|
||||
func unpackStringBase64(msg []byte, off, end int) (string, int, error) {
|
||||
// Rest of the RR is base64 encoded value, so we don't need an explicit length
|
||||
// to be set. Thus far all RRs that have base64 encoded fields have those as their
|
||||
// last one. What we do need is the end of the RR!
|
||||
if end > len(msg) {
|
||||
return "", len(msg), &Error{err: "overflow unpacking base64"}
|
||||
}
|
||||
s := toBase64(msg[off:end])
|
||||
return s, end, nil
|
||||
}
|
||||
|
||||
func packStringBase64(s string, msg []byte, off int) (int, error) {
|
||||
b64, err := fromBase64([]byte(s))
|
||||
if err != nil {
|
||||
return len(msg), err
|
||||
}
|
||||
if off+len(b64) > len(msg) {
|
||||
return len(msg), &Error{err: "overflow packing base64"}
|
||||
}
|
||||
copy(msg[off:off+len(b64)], b64)
|
||||
off += len(b64)
|
||||
return off, nil
|
||||
}
|
||||
|
||||
func unpackStringHex(msg []byte, off, end int) (string, int, error) {
|
||||
// Rest of the RR is hex encoded value, so we don't need an explicit length
|
||||
// to be set. NSEC and TSIG have hex fields with a length field.
|
||||
// What we do need is the end of the RR!
|
||||
if end > len(msg) {
|
||||
return "", len(msg), &Error{err: "overflow unpacking hex"}
|
||||
}
|
||||
|
||||
s := hex.EncodeToString(msg[off:end])
|
||||
return s, end, nil
|
||||
}
|
||||
|
||||
func packStringHex(s string, msg []byte, off int) (int, error) {
|
||||
h, err := hex.DecodeString(s)
|
||||
if err != nil {
|
||||
return len(msg), err
|
||||
}
|
||||
if off+(len(h)) > len(msg) {
|
||||
return len(msg), &Error{err: "overflow packing hex"}
|
||||
}
|
||||
copy(msg[off:off+len(h)], h)
|
||||
off += len(h)
|
||||
return off, nil
|
||||
}
|
||||
|
||||
func unpackStringTxt(msg []byte, off int) ([]string, int, error) {
|
||||
txt, off, err := unpackTxt(msg, off)
|
||||
if err != nil {
|
||||
return nil, len(msg), err
|
||||
}
|
||||
return txt, off, nil
|
||||
}
|
||||
|
||||
func packStringTxt(s []string, msg []byte, off int) (int, error) {
|
||||
txtTmp := make([]byte, 256*4+1) // If the whole string consists out of \DDD we need this many.
|
||||
off, err := packTxt(s, msg, off, txtTmp)
|
||||
if err != nil {
|
||||
return len(msg), err
|
||||
}
|
||||
return off, nil
|
||||
}
|
||||
|
||||
func unpackDataOpt(msg []byte, off int) ([]EDNS0, int, error) {
|
||||
var edns []EDNS0
|
||||
Option:
|
||||
code := uint16(0)
|
||||
if off+4 > len(msg) {
|
||||
return nil, len(msg), &Error{err: "overflow unpacking opt"}
|
||||
}
|
||||
code = binary.BigEndian.Uint16(msg[off:])
|
||||
off += 2
|
||||
optlen := binary.BigEndian.Uint16(msg[off:])
|
||||
off += 2
|
||||
if off+int(optlen) > len(msg) {
|
||||
return nil, len(msg), &Error{err: "overflow unpacking opt"}
|
||||
}
|
||||
switch code {
|
||||
case EDNS0NSID:
|
||||
e := new(EDNS0_NSID)
|
||||
if err := e.unpack(msg[off : off+int(optlen)]); err != nil {
|
||||
return nil, len(msg), err
|
||||
}
|
||||
edns = append(edns, e)
|
||||
off += int(optlen)
|
||||
case EDNS0SUBNET:
|
||||
e := new(EDNS0_SUBNET)
|
||||
if err := e.unpack(msg[off : off+int(optlen)]); err != nil {
|
||||
return nil, len(msg), err
|
||||
}
|
||||
edns = append(edns, e)
|
||||
off += int(optlen)
|
||||
case EDNS0COOKIE:
|
||||
e := new(EDNS0_COOKIE)
|
||||
if err := e.unpack(msg[off : off+int(optlen)]); err != nil {
|
||||
return nil, len(msg), err
|
||||
}
|
||||
edns = append(edns, e)
|
||||
off += int(optlen)
|
||||
case EDNS0UL:
|
||||
e := new(EDNS0_UL)
|
||||
if err := e.unpack(msg[off : off+int(optlen)]); err != nil {
|
||||
return nil, len(msg), err
|
||||
}
|
||||
edns = append(edns, e)
|
||||
off += int(optlen)
|
||||
case EDNS0LLQ:
|
||||
e := new(EDNS0_LLQ)
|
||||
if err := e.unpack(msg[off : off+int(optlen)]); err != nil {
|
||||
return nil, len(msg), err
|
||||
}
|
||||
edns = append(edns, e)
|
||||
off += int(optlen)
|
||||
case EDNS0DAU:
|
||||
e := new(EDNS0_DAU)
|
||||
if err := e.unpack(msg[off : off+int(optlen)]); err != nil {
|
||||
return nil, len(msg), err
|
||||
}
|
||||
edns = append(edns, e)
|
||||
off += int(optlen)
|
||||
case EDNS0DHU:
|
||||
e := new(EDNS0_DHU)
|
||||
if err := e.unpack(msg[off : off+int(optlen)]); err != nil {
|
||||
return nil, len(msg), err
|
||||
}
|
||||
edns = append(edns, e)
|
||||
off += int(optlen)
|
||||
case EDNS0N3U:
|
||||
e := new(EDNS0_N3U)
|
||||
if err := e.unpack(msg[off : off+int(optlen)]); err != nil {
|
||||
return nil, len(msg), err
|
||||
}
|
||||
edns = append(edns, e)
|
||||
off += int(optlen)
|
||||
case EDNS0PADDING:
|
||||
e := new(EDNS0_PADDING)
|
||||
if err := e.unpack(msg[off : off+int(optlen)]); err != nil {
|
||||
return nil, len(msg), err
|
||||
}
|
||||
edns = append(edns, e)
|
||||
off += int(optlen)
|
||||
default:
|
||||
e := new(EDNS0_LOCAL)
|
||||
e.Code = code
|
||||
if err := e.unpack(msg[off : off+int(optlen)]); err != nil {
|
||||
return nil, len(msg), err
|
||||
}
|
||||
edns = append(edns, e)
|
||||
off += int(optlen)
|
||||
}
|
||||
|
||||
if off < len(msg) {
|
||||
goto Option
|
||||
}
|
||||
|
||||
return edns, off, nil
|
||||
}
|
||||
|
||||
func packDataOpt(options []EDNS0, msg []byte, off int) (int, error) {
|
||||
for _, el := range options {
|
||||
b, err := el.pack()
|
||||
if err != nil || off+3 > len(msg) {
|
||||
return len(msg), &Error{err: "overflow packing opt"}
|
||||
}
|
||||
binary.BigEndian.PutUint16(msg[off:], el.Option()) // Option code
|
||||
binary.BigEndian.PutUint16(msg[off+2:], uint16(len(b))) // Length
|
||||
off += 4
|
||||
if off+len(b) > len(msg) {
|
||||
copy(msg[off:], b)
|
||||
off = len(msg)
|
||||
continue
|
||||
}
|
||||
// Actual data
|
||||
copy(msg[off:off+len(b)], b)
|
||||
off += len(b)
|
||||
}
|
||||
return off, nil
|
||||
}
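// Sketch, not part of the vendored file: every EDNS0 option is packed as a 2-byte
// option code, a 2-byte length and the option data. The EDNS0_NSID value used
// here is only an illustration.
func examplePackDataOpt() {
	nsid := &EDNS0_NSID{Code: EDNS0NSID, Nsid: "6578616d706c65"} // hex of "example"
	buf := make([]byte, 16)
	off, _ := packDataOpt([]EDNS0{nsid}, buf, 0)
	// buf[:off] holds code 0x0003, length 0x0007 and the seven data bytes.
	_ = off
}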
|
||||
|
||||
func unpackStringOctet(msg []byte, off int) (string, int, error) {
|
||||
s := string(msg[off:])
|
||||
return s, len(msg), nil
|
||||
}
|
||||
|
||||
func packStringOctet(s string, msg []byte, off int) (int, error) {
|
||||
txtTmp := make([]byte, 256*4+1)
|
||||
off, err := packOctetString(s, msg, off, txtTmp)
|
||||
if err != nil {
|
||||
return len(msg), err
|
||||
}
|
||||
return off, nil
|
||||
}
|
||||
|
||||
func unpackDataNsec(msg []byte, off int) ([]uint16, int, error) {
|
||||
var nsec []uint16
|
||||
length, window, lastwindow := 0, 0, -1
|
||||
for off < len(msg) {
|
||||
if off+2 > len(msg) {
|
||||
return nsec, len(msg), &Error{err: "overflow unpacking nsecx"}
|
||||
}
|
||||
window = int(msg[off])
|
||||
length = int(msg[off+1])
|
||||
off += 2
|
||||
if window <= lastwindow {
|
||||
// RFC 4034: Blocks are present in the NSEC RR RDATA in
|
||||
// increasing numerical order.
|
||||
return nsec, len(msg), &Error{err: "out of order NSEC block"}
|
||||
}
|
||||
if length == 0 {
|
||||
// RFC 4034: Blocks with no types present MUST NOT be included.
|
||||
return nsec, len(msg), &Error{err: "empty NSEC block"}
|
||||
}
|
||||
if length > 32 {
|
||||
return nsec, len(msg), &Error{err: "NSEC block too long"}
|
||||
}
|
||||
if off+length > len(msg) {
|
||||
return nsec, len(msg), &Error{err: "overflowing NSEC block"}
|
||||
}
|
||||
|
||||
// Walk the bytes in the window and extract the type bits
|
||||
for j := 0; j < length; j++ {
|
||||
b := msg[off+j]
|
||||
// Check the bits one by one, and set the type
|
||||
if b&0x80 == 0x80 {
|
||||
nsec = append(nsec, uint16(window*256+j*8+0))
|
||||
}
|
||||
if b&0x40 == 0x40 {
|
||||
nsec = append(nsec, uint16(window*256+j*8+1))
|
||||
}
|
||||
if b&0x20 == 0x20 {
|
||||
nsec = append(nsec, uint16(window*256+j*8+2))
|
||||
}
|
||||
if b&0x10 == 0x10 {
|
||||
nsec = append(nsec, uint16(window*256+j*8+3))
|
||||
}
|
||||
if b&0x8 == 0x8 {
|
||||
nsec = append(nsec, uint16(window*256+j*8+4))
|
||||
}
|
||||
if b&0x4 == 0x4 {
|
||||
nsec = append(nsec, uint16(window*256+j*8+5))
|
||||
}
|
||||
if b&0x2 == 0x2 {
|
||||
nsec = append(nsec, uint16(window*256+j*8+6))
|
||||
}
|
||||
if b&0x1 == 0x1 {
|
||||
nsec = append(nsec, uint16(window*256+j*8+7))
|
||||
}
|
||||
}
|
||||
off += length
|
||||
lastwindow = window
|
||||
}
|
||||
return nsec, off, nil
|
||||
}
|
||||
|
||||
func packDataNsec(bitmap []uint16, msg []byte, off int) (int, error) {
|
||||
if len(bitmap) == 0 {
|
||||
return off, nil
|
||||
}
|
||||
var lastwindow, lastlength uint16
|
||||
for j := 0; j < len(bitmap); j++ {
|
||||
t := bitmap[j]
|
||||
window := t / 256
|
||||
length := (t-window*256)/8 + 1
|
||||
if window > lastwindow && lastlength != 0 { // New window, jump to the new offset
|
||||
off += int(lastlength) + 2
|
||||
lastlength = 0
|
||||
}
|
||||
if window < lastwindow || length < lastlength {
|
||||
return len(msg), &Error{err: "nsec bits out of order"}
|
||||
}
|
||||
if off+2+int(length) > len(msg) {
|
||||
return len(msg), &Error{err: "overflow packing nsec"}
|
||||
}
|
||||
// Setting the window #
|
||||
msg[off] = byte(window)
|
||||
// Setting the octets length
|
||||
msg[off+1] = byte(length)
|
||||
// Setting the bit value for the type in the right octet
|
||||
msg[off+1+int(length)] |= byte(1 << (7 - (t % 8)))
|
||||
lastwindow, lastlength = window, length
|
||||
}
|
||||
off += int(lastlength) + 2
|
||||
return off, nil
|
||||
}
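// Sketch, not part of the vendored file: the NSEC/NSEC3 type bitmap groups types
// into windows of 256; each window is encoded as window number, octet count, bits.
func exampleNsecBitmap() {
	buf := make([]byte, 4)
	off, _ := packDataNsec([]uint16{TypeA, TypeMX}, buf, 0)
	// TypeA (1) and TypeMX (15) both live in window 0, which needs 2 octets:
	// buf == []byte{0x00, 0x02, 0x40, 0x01}, off == 4
	types, _, _ := unpackDataNsec(buf, 0)
	// types == []uint16{1, 15}
	_, _ = off, types
}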
|
||||
|
||||
func unpackDataDomainNames(msg []byte, off, end int) ([]string, int, error) {
|
||||
var (
|
||||
servers []string
|
||||
s string
|
||||
err error
|
||||
)
|
||||
if end > len(msg) {
|
||||
return nil, len(msg), &Error{err: "overflow unpacking domain names"}
|
||||
}
|
||||
for off < end {
|
||||
s, off, err = UnpackDomainName(msg, off)
|
||||
if err != nil {
|
||||
return servers, len(msg), err
|
||||
}
|
||||
servers = append(servers, s)
|
||||
}
|
||||
return servers, off, nil
|
||||
}
|
||||
|
||||
func packDataDomainNames(names []string, msg []byte, off int, compression map[string]int, compress bool) (int, error) {
|
||||
var err error
|
||||
for j := 0; j < len(names); j++ {
|
||||
off, err = PackDomainName(names[j], msg, off, compression, false && compress)
|
||||
if err != nil {
|
||||
return len(msg), err
|
||||
}
|
||||
}
|
||||
return off, nil
|
||||
}
|
||||
vendor/github.com/miekg/dns/nsecx.go (generated, vendored, new file, 106 lines)
@@ -0,0 +1,106 @@
|
||||
package dns
|
||||
|
||||
import (
|
||||
"crypto/sha1"
|
||||
"hash"
|
||||
"strings"
|
||||
)
|
||||
|
||||
type saltWireFmt struct {
|
||||
Salt string `dns:"size-hex"`
|
||||
}
|
||||
|
||||
// HashName hashes a string (label) according to RFC 5155. It returns the hashed string in uppercase.
|
||||
func HashName(label string, ha uint8, iter uint16, salt string) string {
|
||||
saltwire := new(saltWireFmt)
|
||||
saltwire.Salt = salt
|
||||
wire := make([]byte, DefaultMsgSize)
|
||||
n, err := packSaltWire(saltwire, wire)
|
||||
if err != nil {
|
||||
return ""
|
||||
}
|
||||
wire = wire[:n]
|
||||
name := make([]byte, 255)
|
||||
off, err := PackDomainName(strings.ToLower(label), name, 0, nil, false)
|
||||
if err != nil {
|
||||
return ""
|
||||
}
|
||||
name = name[:off]
|
||||
var s hash.Hash
|
||||
switch ha {
|
||||
case SHA1:
|
||||
s = sha1.New()
|
||||
default:
|
||||
return ""
|
||||
}
|
||||
|
||||
// k = 0
|
||||
s.Write(name)
|
||||
s.Write(wire)
|
||||
nsec3 := s.Sum(nil)
|
||||
// k > 0
|
||||
for k := uint16(0); k < iter; k++ {
|
||||
s.Reset()
|
||||
s.Write(nsec3)
|
||||
s.Write(wire)
|
||||
nsec3 = s.Sum(nsec3[:0])
|
||||
}
|
||||
return toBase32(nsec3)
|
||||
}
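// Sketch, not part of the vendored file: HashName implements the RFC 5155 NSEC3
// hash; the parameters below are those of the RFC's Appendix A example
// (SHA-1, 12 iterations, salt "aabbccdd").
func exampleHashName() {
	h := HashName("example.", SHA1, 12, "aabbccdd")
	// h == "0P9MHAVEQVM6T7VBL5LOP2U3T2RP3TOM"
	_ = h
}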
|
||||
|
||||
// Cover returns true if a name is covered by the NSEC3 record
|
||||
func (rr *NSEC3) Cover(name string) bool {
|
||||
nameHash := HashName(name, rr.Hash, rr.Iterations, rr.Salt)
|
||||
owner := strings.ToUpper(rr.Hdr.Name)
|
||||
labelIndices := Split(owner)
|
||||
if len(labelIndices) < 2 {
|
||||
return false
|
||||
}
|
||||
ownerHash := owner[:labelIndices[1]-1]
|
||||
ownerZone := owner[labelIndices[1]:]
|
||||
if !IsSubDomain(ownerZone, strings.ToUpper(name)) { // name is outside owner zone
|
||||
return false
|
||||
}
|
||||
|
||||
nextHash := rr.NextDomain
|
||||
if ownerHash == nextHash { // empty interval
|
||||
return false
|
||||
}
|
||||
if ownerHash > nextHash { // end of zone
|
||||
if nameHash > ownerHash { // covered since there is nothing after ownerHash
|
||||
return true
|
||||
}
|
||||
return nameHash < nextHash // if nameHash is before beginning of zone it is covered
|
||||
}
|
||||
if nameHash < ownerHash { // nameHash is before ownerHash, not covered
|
||||
return false
|
||||
}
|
||||
return nameHash < nextHash // if nameHash is before nextHash it is covered (between ownerHash and nextHash)
|
||||
}
|
||||
|
||||
// Match returns true if a name matches the NSEC3 record
|
||||
func (rr *NSEC3) Match(name string) bool {
|
||||
nameHash := HashName(name, rr.Hash, rr.Iterations, rr.Salt)
|
||||
owner := strings.ToUpper(rr.Hdr.Name)
|
||||
labelIndices := Split(owner)
|
||||
if len(labelIndices) < 2 {
|
||||
return false
|
||||
}
|
||||
ownerHash := owner[:labelIndices[1]-1]
|
||||
ownerZone := owner[labelIndices[1]:]
|
||||
if !IsSubDomain(ownerZone, strings.ToUpper(name)) { // name is outside owner zone
|
||||
return false
|
||||
}
|
||||
if ownerHash == nameHash {
|
||||
return true
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
func packSaltWire(sw *saltWireFmt, msg []byte) (int, error) {
|
||||
off, err := packStringHex(sw.Salt, msg, 0)
|
||||
if err != nil {
|
||||
return off, err
|
||||
}
|
||||
return off, nil
|
||||
}
|
||||
vendor/github.com/miekg/dns/privaterr.go (generated, vendored, new file, 149 lines)
@@ -0,0 +1,149 @@
|
||||
package dns
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"strings"
|
||||
)
|
||||
|
||||
// PrivateRdata is an interface used for implementing "Private Use" RR types, see
|
||||
// RFC 6895. This allows one to experiment with new RR types, without requesting an
|
||||
// official type code. Also see dns.PrivateHandle and dns.PrivateHandleRemove.
|
||||
type PrivateRdata interface {
|
||||
// String returns the text presentation of the Rdata of the Private RR.
|
||||
String() string
|
||||
// Parse parses the Rdata of the private RR.
|
||||
Parse([]string) error
|
||||
// Pack is used when packing a private RR into a buffer.
|
||||
Pack([]byte) (int, error)
|
||||
// Unpack is used when unpacking a private RR from a buffer.
|
||||
// TODO(miek): diff. signature than Pack, see edns0.go for instance.
|
||||
Unpack([]byte) (int, error)
|
||||
// Copy copies the Rdata.
|
||||
Copy(PrivateRdata) error
|
||||
// Len returns the length in octets of the Rdata.
|
||||
Len() int
|
||||
}
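// Sketch, not part of the vendored file: a minimal PrivateRdata implementation
// carrying one free-form text field. The type name and the type code 0xFF01
// (inside the RFC 6895 private-use range) are illustrative only; such a type
// would be registered with
// PrivateHandle("EXAMPLETXT", 0xFF01, func() PrivateRdata { return new(exampleRdata) }).
type exampleRdata struct {
	Txt string
}

func (e *exampleRdata) String() string           { return e.Txt }
func (e *exampleRdata) Parse(txt []string) error { e.Txt = strings.Join(txt, " "); return nil }
func (e *exampleRdata) Pack(buf []byte) (int, error) {
	if len(buf) < len(e.Txt) {
		return 0, fmt.Errorf("dns: buffer too small for example rdata")
	}
	return copy(buf, e.Txt), nil
}
func (e *exampleRdata) Unpack(buf []byte) (int, error) { e.Txt = string(buf); return len(buf), nil }
func (e *exampleRdata) Copy(dest PrivateRdata) error {
	d, ok := dest.(*exampleRdata)
	if !ok {
		return fmt.Errorf("dns: cannot copy example rdata into %T", dest)
	}
	d.Txt = e.Txt
	return nil
}
func (e *exampleRdata) Len() int { return len(e.Txt) }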
|
||||
|
||||
// PrivateRR represents an RR that uses a PrivateRdata user-defined type.
|
||||
// It mocks normal RRs and implements dns.RR interface.
|
||||
type PrivateRR struct {
|
||||
Hdr RR_Header
|
||||
Data PrivateRdata
|
||||
}
|
||||
|
||||
func mkPrivateRR(rrtype uint16) *PrivateRR {
|
||||
// Panics if RR is not an instance of PrivateRR.
|
||||
rrfunc, ok := TypeToRR[rrtype]
|
||||
if !ok {
|
||||
panic(fmt.Sprintf("dns: invalid operation with Private RR type %d", rrtype))
|
||||
}
|
||||
|
||||
anyrr := rrfunc()
|
||||
switch rr := anyrr.(type) {
|
||||
case *PrivateRR:
|
||||
return rr
|
||||
}
|
||||
panic(fmt.Sprintf("dns: RR is not a PrivateRR, TypeToRR[%d] generator returned %T", rrtype, anyrr))
|
||||
}
|
||||
|
||||
// Header returns the RR header of r.
|
||||
func (r *PrivateRR) Header() *RR_Header { return &r.Hdr }
|
||||
|
||||
func (r *PrivateRR) String() string { return r.Hdr.String() + r.Data.String() }
|
||||
|
||||
// Private len and copy parts to satisfy RR interface.
|
||||
func (r *PrivateRR) len() int { return r.Hdr.len() + r.Data.Len() }
|
||||
func (r *PrivateRR) copy() RR {
|
||||
// make new RR like this:
|
||||
rr := mkPrivateRR(r.Hdr.Rrtype)
|
||||
newh := r.Hdr.copyHeader()
|
||||
rr.Hdr = *newh
|
||||
|
||||
err := r.Data.Copy(rr.Data)
|
||||
if err != nil {
|
||||
panic("dns: got value that could not be used to copy Private rdata")
|
||||
}
|
||||
return rr
|
||||
}
|
||||
func (r *PrivateRR) pack(msg []byte, off int, compression map[string]int, compress bool) (int, error) {
|
||||
off, err := r.Hdr.pack(msg, off, compression, compress)
|
||||
if err != nil {
|
||||
return off, err
|
||||
}
|
||||
headerEnd := off
|
||||
n, err := r.Data.Pack(msg[off:])
|
||||
if err != nil {
|
||||
return len(msg), err
|
||||
}
|
||||
off += n
|
||||
r.Header().Rdlength = uint16(off - headerEnd)
|
||||
return off, nil
|
||||
}
|
||||
|
||||
// PrivateHandle registers a private resource record type. It requires
|
||||
// the string and numeric representation of the private RR type and a generator function as arguments.
|
||||
func PrivateHandle(rtypestr string, rtype uint16, generator func() PrivateRdata) {
|
||||
rtypestr = strings.ToUpper(rtypestr)
|
||||
|
||||
TypeToRR[rtype] = func() RR { return &PrivateRR{RR_Header{}, generator()} }
|
||||
TypeToString[rtype] = rtypestr
|
||||
StringToType[rtypestr] = rtype
|
||||
|
||||
typeToUnpack[rtype] = func(h RR_Header, msg []byte, off int) (RR, int, error) {
|
||||
if noRdata(h) {
|
||||
return &h, off, nil
|
||||
}
|
||||
var err error
|
||||
|
||||
rr := mkPrivateRR(h.Rrtype)
|
||||
rr.Hdr = h
|
||||
|
||||
off1, err := rr.Data.Unpack(msg[off:])
|
||||
off += off1
|
||||
if err != nil {
|
||||
return rr, off, err
|
||||
}
|
||||
return rr, off, err
|
||||
}
|
||||
|
||||
setPrivateRR := func(h RR_Header, c chan lex, o, f string) (RR, *ParseError, string) {
|
||||
rr := mkPrivateRR(h.Rrtype)
|
||||
rr.Hdr = h
|
||||
|
||||
var l lex
|
||||
text := make([]string, 0, 2) // could be 0..N elements, median is probably 1
|
||||
Fetch:
|
||||
for {
|
||||
// TODO(miek): we could also be returning _QUOTE, this might or might not
|
||||
// be an issue (basically parsing TXT becomes hard)
|
||||
switch l = <-c; l.value {
|
||||
case zNewline, zEOF:
|
||||
break Fetch
|
||||
case zString:
|
||||
text = append(text, l.token)
|
||||
}
|
||||
}
|
||||
|
||||
err := rr.Data.Parse(text)
|
||||
if err != nil {
|
||||
return nil, &ParseError{f, err.Error(), l}, ""
|
||||
}
|
||||
|
||||
return rr, nil, ""
|
||||
}
|
||||
|
||||
typeToparserFunc[rtype] = parserFunc{setPrivateRR, true}
|
||||
}
|
||||
|
||||
// PrivateHandleRemove removes the definitions required to support a private RR type.
|
||||
func PrivateHandleRemove(rtype uint16) {
|
||||
rtypestr, ok := TypeToString[rtype]
|
||||
if ok {
|
||||
delete(TypeToRR, rtype)
|
||||
delete(TypeToString, rtype)
|
||||
delete(typeToparserFunc, rtype)
|
||||
delete(StringToType, rtypestr)
|
||||
delete(typeToUnpack, rtype)
|
||||
}
|
||||
return
|
||||
}
|
||||
vendor/github.com/miekg/dns/rawmsg.go (generated, vendored, new file, 49 lines)
@@ -0,0 +1,49 @@
package dns

import "encoding/binary"

// rawSetRdlength sets the rdlength in the header of
// the RR. The offset 'off' must be positioned at the
// start of the header of the RR, 'end' must be the
// end of the RR.
func rawSetRdlength(msg []byte, off, end int) bool {
	l := len(msg)
Loop:
	for {
		if off+1 > l {
			return false
		}
		c := int(msg[off])
		off++
		switch c & 0xC0 {
		case 0x00:
			if c == 0x00 {
				// End of the domainname
				break Loop
			}
			if off+c > l {
				return false
			}
			off += c

		case 0xC0:
			// pointer, next byte included, ends domainname
			off++
			break Loop
		}
	}
	// The domainname has been seen, we are at the start of the fixed part in the header.
	// Type is 2 bytes, class is 2 bytes, ttl 4 and then 2 bytes for the length.
	off += 2 + 2 + 4
	if off+2 > l {
		return false
	}
	// off+2 is the end of the header, 'end' is the end of the rr
	// so 'end' - (off+2) is the length of the rdata
	rdatalen := end - (off + 2)
	if rdatalen > 0xFFFF {
		return false
	}
	binary.BigEndian.PutUint16(msg[off:], uint16(rdatalen))
	return true
}
vendor/github.com/miekg/dns/reverse.go (generated, vendored, new file, 38 lines)
@@ -0,0 +1,38 @@
package dns

// StringToType is the reverse of TypeToString, needed for string parsing.
var StringToType = reverseInt16(TypeToString)

// StringToClass is the reverse of ClassToString, needed for string parsing.
var StringToClass = reverseInt16(ClassToString)

// StringToOpcode is the reverse of OpcodeToString, needed for string parsing.
var StringToOpcode = reverseInt(OpcodeToString)

// StringToRcode is the reverse of RcodeToString, needed for string parsing.
var StringToRcode = reverseInt(RcodeToString)

// Reverse a map
func reverseInt8(m map[uint8]string) map[string]uint8 {
	n := make(map[string]uint8, len(m))
	for u, s := range m {
		n[s] = u
	}
	return n
}

func reverseInt16(m map[uint16]string) map[string]uint16 {
	n := make(map[string]uint16, len(m))
	for u, s := range m {
		n[s] = u
	}
	return n
}

func reverseInt(m map[int]string) map[string]int {
	n := make(map[string]int, len(m))
	for u, s := range m {
		n[s] = u
	}
	return n
}
vendor/github.com/miekg/dns/sanitize.go (generated, vendored, new file, 84 lines)
@@ -0,0 +1,84 @@
package dns

// Dedup removes identical RRs from rrs. It preserves the original ordering.
// The lowest TTL of any duplicates is used in the remaining one. Dedup modifies
// rrs.
// m is used to store the RRs temporarily. If it is nil a new map will be allocated.
func Dedup(rrs []RR, m map[string]RR) []RR {
	if m == nil {
		m = make(map[string]RR)
	}
	// Save the keys, so we don't have to call normalizedString twice.
	keys := make([]*string, 0, len(rrs))

	for _, r := range rrs {
		key := normalizedString(r)
		keys = append(keys, &key)
		if _, ok := m[key]; ok {
			// Shortest TTL wins.
			if m[key].Header().Ttl > r.Header().Ttl {
				m[key].Header().Ttl = r.Header().Ttl
			}
			continue
		}

		m[key] = r
	}
	// If the length of the result map equals the amount of RRs we got,
	// it means they were all different. We can then just return the original rrset.
	if len(m) == len(rrs) {
		return rrs
	}

	j := 0
	for i, r := range rrs {
		// If keys[i] lives in the map, we should copy and remove it.
		if _, ok := m[*keys[i]]; ok {
			delete(m, *keys[i])
			rrs[j] = r
			j++
		}

		if len(m) == 0 {
			break
		}
	}

	return rrs[:j]
}
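// Sketch, not part of the vendored file: duplicates that differ only in TTL
// collapse to a single record carrying the lowest TTL.
func exampleDedup() {
	rr1, _ := NewRR("example.org. 300 IN A 192.0.2.1")
	rr2, _ := NewRR("example.org. 60 IN A 192.0.2.1")
	out := Dedup([]RR{rr1, rr2}, nil)
	// len(out) == 1 and out[0].Header().Ttl == 60
	_ = out
}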
// normalizedString returns a normalized string from r. The TTL
// is removed and the domain name is lowercased. We go from this:
// DomainName<TAB>TTL<TAB>CLASS<TAB>TYPE<TAB>RDATA to:
// lowercasename<TAB>CLASS<TAB>TYPE...
func normalizedString(r RR) string {
	// A string Go DNS makes has: domainname<TAB>TTL<TAB>...
	b := []byte(r.String())

	// find the first non-escaped tab, then another, so we capture where the TTL lives.
	esc := false
	ttlStart, ttlEnd := 0, 0
	for i := 0; i < len(b) && ttlEnd == 0; i++ {
		switch {
		case b[i] == '\\':
			esc = !esc
		case b[i] == '\t' && !esc:
			if ttlStart == 0 {
				ttlStart = i
				continue
			}
			if ttlEnd == 0 {
				ttlEnd = i
			}
		case b[i] >= 'A' && b[i] <= 'Z' && !esc:
			b[i] += 32
		default:
			esc = false
		}
	}

	// remove TTL.
	copy(b[ttlStart:], b[ttlEnd:])
	cut := ttlEnd - ttlStart
	return string(b[:len(b)-cut])
}
vendor/github.com/miekg/dns/scan.go (generated, vendored, new file, 1007 lines)
File diff suppressed because it is too large.
vendor/github.com/miekg/dns/scan_rr.go (generated, vendored, new file, 2199 lines)
File diff suppressed because it is too large.
Some files were not shown because too many files have changed in this diff.