Mirror of https://github.com/Oxalide/vsphere-influxdb-go.git (synced 2023-10-10 11:36:51 +00:00)

Commit: add vendoring with go dep
192 lines: vendor/github.com/influxdata/influxdb/services/graphite/README.md (generated, vendored, new file)
@@ -0,0 +1,192 @@
# The Graphite Input

## A Note On UDP/IP OS Buffer Sizes

If you're using the UDP input and running Linux or FreeBSD, please adjust your UDP buffer
size limit, [see here for more details.](../udp/README.md#a-note-on-udpip-os-buffer-sizes)

## Configuration

Each Graphite input allows the binding address, target database, and protocol to be set. If the database does not exist, it will be created automatically when the input is initialized. The write consistency level can also be set; if any write operations do not meet the configured consistency guarantees, an error will occur and the data will not be indexed. The default consistency level is `ONE`.

Each Graphite input also performs internal batching of the points it receives, as batched writes to the database are more efficient. The default _batch size_ is 5000, the default _pending batch_ factor is 10, and the default _batch timeout_ is 1 second (these defaults are defined in `config.go` below). This means the input writes batches of at most 5000 points, but if a batch has not reached 5000 points within 1 second of the first point being added, it emits that batch regardless of size. The pending batch factor controls how many batches can be held in memory at once, allowing the input to transmit one batch while still building others.
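The batching knobs map directly onto fields of the service's `Config` struct added in `config.go` later in this commit. A minimal sketch, assuming the vendored packages are importable under their usual paths:

```
package main

import (
    "fmt"
    "time"

    "github.com/influxdata/influxdb/services/graphite"
    "github.com/influxdata/influxdb/toml"
)

func main() {
    // NewConfig fills in the defaults described above.
    cfg := graphite.NewConfig()

    // The three batching knobs, shown here with their default values.
    cfg.BatchSize = 5000                          // max points per write
    cfg.BatchPending = 10                         // batches allowed in memory at once
    cfg.BatchTimeout = toml.Duration(time.Second) // flush a partial batch after 1s

    fmt.Printf("%+v\n", cfg)
}
```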
## Parsing Metrics

The Graphite plugin allows measurements to be saved using the Graphite line protocol. By default, enabling the Graphite plugin will allow you to collect metrics and store them using the metric name as the measurement. If you send a metric named `servers.localhost.cpu.loadavg.10`, it will store the full metric name as the measurement with no extracted tags.

While this default setup works, it is not the ideal way to store measurements in InfluxDB, since it does not take advantage of tags. It also will not perform well with large datasets, since queries will be forced to use regexes, which are known not to scale well.

To extract tags from metrics, one or more templates must be configured to parse metrics into tags and measurements.
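As a rough illustration of the default behaviour (no templates configured), here is a sketch using the parser added in `parser.go` later in this commit; the metric name and timestamp are just the example values from above:

```
package main

import (
    "fmt"
    "log"

    "github.com/influxdata/influxdb/services/graphite"
)

func main() {
    // No templates: the built-in default template "measurement*" is used,
    // so the whole Graphite name becomes the measurement.
    p, err := graphite.NewParser(nil, nil)
    if err != nil {
        log.Fatal(err)
    }

    pt, err := p.Parse("servers.localhost.cpu.loadavg.10 42 1435077219")
    if err != nil {
        log.Fatal(err)
    }

    // Prints the point in line protocol: the measurement is the full metric
    // name, there are no tags, and the value lands in the "value" field.
    fmt.Println(pt.String())
}
```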
## Templates

Templates allow matching parts of a metric name to be used as tag keys in the stored metric. They have a similar format to Graphite metric names: the values in between the separators are used as the tag keys, and each tag key takes its value from the part of the metric name in the same position. If a position in the template is left empty, that part of the metric name is skipped.

The special value _measurement_ is used to define the measurement name. It can have a trailing `*` to indicate that the remainder of the metric should be used. If a _measurement_ is not specified, the full metric name is used.

### Basic Matching

`servers.localhost.cpu.loadavg.10`

* Template: `.host.resource.measurement*`
* Output: _measurement_ = `loadavg.10` _tags_ = `host=localhost resource=cpu`
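A minimal sketch of the same mapping, exercised directly against the parser from this commit; `ApplyTemplate` returns the measurement, the extracted tags, and the field key (empty here):

```
package main

import (
    "fmt"
    "log"

    "github.com/influxdata/influxdb/services/graphite"
)

func main() {
    p, err := graphite.NewParser([]string{".host.resource.measurement*"}, nil)
    if err != nil {
        log.Fatal(err)
    }

    measurement, tags, _, err := p.ApplyTemplate("servers.localhost.cpu.loadavg.10")
    if err != nil {
        log.Fatal(err)
    }

    // measurement: "loadavg.10", tags: map[host:localhost resource:cpu]
    fmt.Println(measurement, tags)
}
```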
### Multiple Measurement & Tags Matching

The _measurement_ can be specified multiple times in a template to provide more control over the measurement name. Tags can also be matched multiple times. Multiple values will be joined together using the _Separator_ config variable; by default, this value is `.`.

`servers.localhost.localdomain.cpu.cpu0.user`

* Template: `.host.host.measurement.cpu.measurement`
* Output: _measurement_ = `cpu.user` _tags_ = `host=localhost.localdomain cpu=cpu0`

Since `.` requires queries on measurements to be double-quoted, you may want to set the separator to `_` to simplify querying parsed metrics.

`servers.localhost.cpu.cpu0.user`

* Separator: `_`
* Template: `.host.measurement.cpu.measurement`
* Output: _measurement_ = `cpu_user` _tags_ = `host=localhost cpu=cpu0`
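A sketch of the same separator behaviour using `NewParserWithOptions` from this commit:

```
package main

import (
    "fmt"
    "log"

    "github.com/influxdata/influxdb/services/graphite"
)

func main() {
    p, err := graphite.NewParserWithOptions(graphite.Options{
        Separator: "_",
        Templates: []string{".host.measurement.cpu.measurement"},
    })
    if err != nil {
        log.Fatal(err)
    }

    measurement, tags, _, _ := p.ApplyTemplate("servers.localhost.cpu.cpu0.user")

    // measurement: "cpu_user", tags: map[cpu:cpu0 host:localhost]
    fmt.Println(measurement, tags)
}
```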
### Adding Tags

Additional tags can be added to a metric by specifying them after the template pattern; they are applied only if the tag does not already exist on the received metric. Tags have the same format as in the line protocol, and multiple tags are separated by commas.

`servers.localhost.cpu.loadavg.10`

* Template: `.host.resource.measurement* region=us-west,zone=1a`
* Output: _measurement_ = `loadavg.10` _tags_ = `host=localhost resource=cpu region=us-west zone=1a`

### Fields

A field key can be specified by using the keyword _field_. By default, if no _field_ keyword is specified, the metric will be written to a field named _value_.

The field key can also be derived from the second "half" of the input metric name by specifying `field*` (e.g. `measurement.measurement.field*`). This cannot be used in conjunction with `measurement*`.

It's possible to amend measurement metrics with additional fields, e.g.:

Input:
```
sensu.metric.net.server0.eth0.rx_packets 461295119435 1444234982
sensu.metric.net.server0.eth0.tx_bytes 1093086493388480 1444234982
sensu.metric.net.server0.eth0.rx_bytes 1015633926034834 1444234982
sensu.metric.net.server0.eth0.tx_errors 0 1444234982
sensu.metric.net.server0.eth0.rx_errors 0 1444234982
sensu.metric.net.server0.eth0.tx_dropped 0 1444234982
sensu.metric.net.server0.eth0.rx_dropped 0 1444234982
```

With template:
```
sensu.metric.* ..measurement.host.interface.field
```

Becomes database entry:
```
> select * from net
name: net
---------
time                 host     interface  rx_bytes               rx_dropped  rx_errors  rx_packets         tx_bytes              tx_dropped  tx_errors
1444234982000000000  server0  eth0       1.015633926034834e+15  0           0          4.61295119435e+11  1.09308649338848e+15  0           0
```
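A sketch of the field extraction above, using that template and one of the sample lines; the third return value of `ApplyTemplate` is the parsed field key:

```
package main

import (
    "fmt"
    "log"

    "github.com/influxdata/influxdb/services/graphite"
)

func main() {
    p, err := graphite.NewParser([]string{"sensu.metric.* ..measurement.host.interface.field"}, nil)
    if err != nil {
        log.Fatal(err)
    }

    measurement, tags, field, _ := p.ApplyTemplate("sensu.metric.net.server0.eth0.rx_packets")

    // measurement: "net", tags: map[host:server0 interface:eth0], field: "rx_packets"
    fmt.Println(measurement, tags, field)
}
```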
## Multiple Templates

One template may not match all metrics. For example, using multiple plugins with Diamond will produce metrics in different formats. If you need to use multiple templates, you'll need to define a prefix filter that must match before the template can be applied.

### Filters

Filters have a similar format to templates, but work more like wildcard expressions. When multiple filters would match a metric, the more specific one is chosen. Filters are configured by adding them before the template.

For example,

```
servers.localhost.cpu.loadavg.10
servers.host123.elasticsearch.cache_hits 100
servers.host456.mysql.tx_count 10
servers.host789.prod.mysql.tx_count 10
```

* `servers.*` would match all values
* `servers.*.mysql` would match `servers.host456.mysql.tx_count 10`
* `servers.localhost.*` would match `servers.localhost.cpu.loadavg.10`
* `servers.*.*.mysql` would match `servers.host789.prod.mysql.tx_count 10`
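A sketch of filter precedence with the parser from this commit: both filters match the incoming name, but the more specific `servers.localhost` filter (and its template) wins over `servers.*`:

```
package main

import (
    "fmt"
    "log"

    "github.com/influxdata/influxdb/services/graphite"
)

func main() {
    p, err := graphite.NewParser([]string{
        "servers.* .host.measurement*",
        "servers.localhost .host.resource.measurement*",
    }, nil)
    if err != nil {
        log.Fatal(err)
    }

    measurement, tags, _, _ := p.ApplyTemplate("servers.localhost.cpu.loadavg.10")

    // measurement: "loadavg.10", tags: map[host:localhost resource:cpu]
    // (the servers.* template alone would have produced "cpu.loadavg.10")
    fmt.Println(measurement, tags)
}
```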
## Default Templates

If no template filters are defined, or you just want one basic template, you can define a default template. This template will apply to any metric that has not already matched a filter.

```
dev.http.requests.200
prod.myapp.errors.count
dev.db.queries.count
```

* `env.app.measurement*` would create
  * _measurement_ = `requests.200` _tags_ = `env=dev,app=http`
  * _measurement_ = `errors.count` _tags_ = `env=prod,app=myapp`
  * _measurement_ = `queries.count` _tags_ = `env=dev,app=db`

## Global Tags

If you need to add the same set of tags to all metrics, you can define them globally at the plugin level rather than within each template description.
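A minimal sketch of global tags using the parser's default-tags argument; the configured `tags` list is assumed to reach the parser via `Config.DefaultTags()` from `config.go` below. Tags parsed from the metric itself are never overridden:

```
package main

import (
    "fmt"
    "log"

    "github.com/influxdata/influxdb/models"
    "github.com/influxdata/influxdb/services/graphite"
)

func main() {
    global := models.NewTags(map[string]string{"region": "us-east", "zone": "1c"})

    p, err := graphite.NewParser([]string{".host.measurement*"}, global)
    if err != nil {
        log.Fatal(err)
    }

    pt, err := p.Parse("servers.localhost.cpu_load 11 1435077219")
    if err != nil {
        log.Fatal(err)
    }

    // The point carries host=localhost from the template plus the global
    // region and zone tags.
    fmt.Println(pt.String())
}
```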
## Minimal Config
```
[[graphite]]
  enabled = true
  # bind-address = ":2003"
  # protocol = "tcp"
  # consistency-level = "one"

  ### If matching multiple measurement values, this string will be used to join the matched values.
  # separator = "."

  ### Default tags that will be added to all metrics. These can be overridden at the template level
  ### or by tags extracted from the metric.
  # tags = ["region=us-east", "zone=1c"]

  ### Each template line requires a template pattern. It can have an optional
  ### filter before the template, separated by spaces. It can also have optional extra
  ### tags following the template. Multiple tags should be separated by commas and no spaces,
  ### similar to the line protocol format. There can be only one default template.
  # templates = [
  #   "*.app env.service.resource.measurement",
  #   # Default template
  #   "server.*",
  # ]
```

## Customized Config
```
[[graphite]]
  enabled = true
  separator = "_"
  tags = ["region=us-east", "zone=1c"]
  templates = [
    # filter + template
    "*.app env.service.resource.measurement",

    # filter + template + extra tag
    "stats.* .host.measurement* region=us-west,agent=sensu",

    # filter + template with field key
    "stats.* .host.measurement.field",

    # default template. Ignore the first Graphite component, "servers"
    ".measurement*",
  ]
```

## Two Graphite Listeners, UDP & TCP, Config

```
[[graphite]]
  enabled = true
  bind-address = ":2003"
  protocol = "tcp"
  # consistency-level = "one"

[[graphite]]
  enabled = true
  bind-address = ":2004" # the bind address
  protocol = "udp" # protocol to read via
  udp-read-buffer = 8388608 # (8*1024*1024) UDP read buffer size
```
288 lines: vendor/github.com/influxdata/influxdb/services/graphite/config.go (generated, vendored, new file)
@@ -0,0 +1,288 @@
package graphite

import (
    "fmt"
    "strings"
    "time"

    "github.com/influxdata/influxdb/models"
    "github.com/influxdata/influxdb/monitor/diagnostics"
    "github.com/influxdata/influxdb/toml"
)

const (
    // DefaultBindAddress is the default binding interface if none is specified.
    DefaultBindAddress = ":2003"

    // DefaultDatabase is the default database if none is specified.
    DefaultDatabase = "graphite"

    // DefaultProtocol is the default IP protocol used by the Graphite input.
    DefaultProtocol = "tcp"

    // DefaultConsistencyLevel is the default write consistency for the Graphite input.
    DefaultConsistencyLevel = "one"

    // DefaultSeparator is the default join character to use when joining multiple
    // measurement parts in a template.
    DefaultSeparator = "."

    // DefaultBatchSize is the default write batch size.
    DefaultBatchSize = 5000

    // DefaultBatchPending is the default number of pending write batches.
    DefaultBatchPending = 10

    // DefaultBatchTimeout is the default Graphite batch timeout.
    DefaultBatchTimeout = time.Second

    // DefaultUDPReadBuffer is the default buffer size for the UDP listener.
    // Sets the size of the operating system's receive buffer associated with
    // the UDP traffic. Keep in mind that the OS must be able
    // to handle the number set here or the UDP listener will error and exit.
    //
    // DefaultReadBuffer = 0 means to use the OS default, which is usually too
    // small for high UDP performance.
    //
    // Increasing OS buffer limits:
    //     Linux:      sudo sysctl -w net.core.rmem_max=<read-buffer>
    //     BSD/Darwin: sudo sysctl -w kern.ipc.maxsockbuf=<read-buffer>
    DefaultUDPReadBuffer = 0
)

// Config represents the configuration for Graphite endpoints.
type Config struct {
    Enabled          bool          `toml:"enabled"`
    BindAddress      string        `toml:"bind-address"`
    Database         string        `toml:"database"`
    RetentionPolicy  string        `toml:"retention-policy"`
    Protocol         string        `toml:"protocol"`
    BatchSize        int           `toml:"batch-size"`
    BatchPending     int           `toml:"batch-pending"`
    BatchTimeout     toml.Duration `toml:"batch-timeout"`
    ConsistencyLevel string        `toml:"consistency-level"`
    Templates        []string      `toml:"templates"`
    Tags             []string      `toml:"tags"`
    Separator        string        `toml:"separator"`
    UDPReadBuffer    int           `toml:"udp-read-buffer"`
}

// NewConfig returns a new instance of Config with defaults.
func NewConfig() Config {
    return Config{
        BindAddress:      DefaultBindAddress,
        Database:         DefaultDatabase,
        Protocol:         DefaultProtocol,
        BatchSize:        DefaultBatchSize,
        BatchPending:     DefaultBatchPending,
        BatchTimeout:     toml.Duration(DefaultBatchTimeout),
        ConsistencyLevel: DefaultConsistencyLevel,
        Separator:        DefaultSeparator,
    }
}

// WithDefaults takes the given config and returns a new config with any required
// default values set.
func (c *Config) WithDefaults() *Config {
    d := *c
    if d.BindAddress == "" {
        d.BindAddress = DefaultBindAddress
    }
    if d.Database == "" {
        d.Database = DefaultDatabase
    }
    if d.Protocol == "" {
        d.Protocol = DefaultProtocol
    }
    if d.BatchSize == 0 {
        d.BatchSize = DefaultBatchSize
    }
    if d.BatchPending == 0 {
        d.BatchPending = DefaultBatchPending
    }
    if d.BatchTimeout == 0 {
        d.BatchTimeout = toml.Duration(DefaultBatchTimeout)
    }
    if d.ConsistencyLevel == "" {
        d.ConsistencyLevel = DefaultConsistencyLevel
    }
    if d.Separator == "" {
        d.Separator = DefaultSeparator
    }
    if d.UDPReadBuffer == 0 {
        d.UDPReadBuffer = DefaultUDPReadBuffer
    }
    return &d
}

// DefaultTags returns the config's tags.
func (c *Config) DefaultTags() models.Tags {
    m := make(map[string]string, len(c.Tags))
    for _, t := range c.Tags {
        parts := strings.Split(t, "=")
        m[parts[0]] = parts[1]
    }
    return models.NewTags(m)
}

// Validate validates the config's templates and tags.
func (c *Config) Validate() error {
    if err := c.validateTemplates(); err != nil {
        return err
    }

    if err := c.validateTags(); err != nil {
        return err
    }

    return nil
}

func (c *Config) validateTemplates() error {
    // map to keep track of filters we see
    filters := map[string]struct{}{}

    for i, t := range c.Templates {
        parts := strings.Fields(t)
        // Ensure template string is non-empty
        if len(parts) == 0 {
            return fmt.Errorf("missing template at position: %d", i)
        }
        if len(parts) == 1 && parts[0] == "" {
            return fmt.Errorf("missing template at position: %d", i)
        }

        if len(parts) > 3 {
            return fmt.Errorf("invalid template format: '%s'", t)
        }

        template := t
        filter := ""
        tags := ""
        if len(parts) >= 2 {
            // We could have <filter> <template> or <template> <tags>. Equals is only allowed in
            // tags section.
            if strings.Contains(parts[1], "=") {
                template = parts[0]
                tags = parts[1]
            } else {
                filter = parts[0]
                template = parts[1]
            }
        }

        if len(parts) == 3 {
            tags = parts[2]
        }

        // Validate the template has one and only one measurement
        if err := c.validateTemplate(template); err != nil {
            return err
        }

        // Prevent duplicate filters in the config
        if _, ok := filters[filter]; ok {
            return fmt.Errorf("duplicate filter '%s' found at position: %d", filter, i)
        }
        filters[filter] = struct{}{}

        if filter != "" {
            // Validate filter expression is valid
            if err := c.validateFilter(filter); err != nil {
                return err
            }
        }

        if tags != "" {
            // Validate tags
            for _, tagStr := range strings.Split(tags, ",") {
                if err := c.validateTag(tagStr); err != nil {
                    return err
                }
            }
        }
    }
    return nil
}

func (c *Config) validateTags() error {
    for _, t := range c.Tags {
        if err := c.validateTag(t); err != nil {
            return err
        }
    }
    return nil
}

func (c *Config) validateTemplate(template string) error {
    hasMeasurement := false
    for _, p := range strings.Split(template, ".") {
        if p == "measurement" || p == "measurement*" {
            hasMeasurement = true
        }
    }

    if !hasMeasurement {
        return fmt.Errorf("no measurement in template `%s`", template)
    }

    return nil
}

func (c *Config) validateFilter(filter string) error {
    for _, p := range strings.Split(filter, ".") {
        if p == "" {
            return fmt.Errorf("filter contains blank section: %s", filter)
        }

        if strings.Contains(p, "*") && p != "*" {
            return fmt.Errorf("invalid filter wildcard section: %s", filter)
        }
    }
    return nil
}

func (c *Config) validateTag(keyValue string) error {
    parts := strings.Split(keyValue, "=")
    if len(parts) != 2 {
        return fmt.Errorf("invalid template tags: '%s'", keyValue)
    }

    if parts[0] == "" || parts[1] == "" {
        return fmt.Errorf("invalid template tags: '%s'", keyValue)
    }

    return nil
}

// Configs wraps a slice of Config to aggregate diagnostics.
type Configs []Config

// Diagnostics returns one set of diagnostics for all of the Configs.
func (c Configs) Diagnostics() (*diagnostics.Diagnostics, error) {
    d := &diagnostics.Diagnostics{
        Columns: []string{"enabled", "bind-address", "protocol", "database", "retention-policy", "batch-size", "batch-pending", "batch-timeout"},
    }

    for _, cc := range c {
        if !cc.Enabled {
            d.AddRow([]interface{}{false})
            continue
        }

        r := []interface{}{true, cc.BindAddress, cc.Protocol, cc.Database, cc.RetentionPolicy, cc.BatchSize, cc.BatchPending, cc.BatchTimeout}
        d.AddRow(r)
    }

    return d, nil
}

// Enabled returns true if any underlying Config is Enabled.
func (c Configs) Enabled() bool {
    for _, cc := range c {
        if cc.Enabled {
            return true
        }
    }
    return false
}
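As a rough usage sketch (modeled on `config_test.go` below), the config can be decoded from TOML with `github.com/BurntSushi/toml` and then checked with `Validate`:

```
package main

import (
    "fmt"
    "log"

    "github.com/BurntSushi/toml"
    "github.com/influxdata/influxdb/services/graphite"
)

func main() {
    var c graphite.Config
    if _, err := toml.Decode(`
enabled = true
bind-address = ":2003"
database = "graphite"
templates = ["servers.* .host.measurement*"]
tags = ["region=us-east"]
`, &c); err != nil {
        log.Fatal(err)
    }

    // Validate checks the templates and the default tags.
    if err := c.Validate(); err != nil {
        log.Fatal(err)
    }

    fmt.Println("config OK:", c.BindAddress, c.Database)
}
```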
170 lines: vendor/github.com/influxdata/influxdb/services/graphite/config_test.go (generated, vendored, new file)
@@ -0,0 +1,170 @@
|
||||
package graphite_test
|
||||
|
||||
import (
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/BurntSushi/toml"
|
||||
"github.com/influxdata/influxdb/services/graphite"
|
||||
)
|
||||
|
||||
func TestConfig_Parse(t *testing.T) {
|
||||
// Parse configuration.
|
||||
var c graphite.Config
|
||||
if _, err := toml.Decode(`
|
||||
bind-address = ":8080"
|
||||
database = "mydb"
|
||||
retention-policy = "myrp"
|
||||
enabled = true
|
||||
protocol = "tcp"
|
||||
batch-size=100
|
||||
batch-pending=77
|
||||
batch-timeout="1s"
|
||||
consistency-level="one"
|
||||
templates=["servers.* .host.measurement*"]
|
||||
tags=["region=us-east"]
|
||||
`, &c); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
// Validate configuration.
|
||||
if c.BindAddress != ":8080" {
|
||||
t.Fatalf("unexpected bind address: %s", c.BindAddress)
|
||||
} else if c.Database != "mydb" {
|
||||
t.Fatalf("unexpected database selected: %s", c.Database)
|
||||
} else if c.RetentionPolicy != "myrp" {
|
||||
t.Fatalf("unexpected retention policy selected: %s", c.RetentionPolicy)
|
||||
} else if c.Enabled != true {
|
||||
t.Fatalf("unexpected graphite enabled: %v", c.Enabled)
|
||||
} else if c.Protocol != "tcp" {
|
||||
t.Fatalf("unexpected graphite protocol: %s", c.Protocol)
|
||||
} else if c.BatchSize != 100 {
|
||||
t.Fatalf("unexpected graphite batch size: %d", c.BatchSize)
|
||||
} else if c.BatchPending != 77 {
|
||||
t.Fatalf("unexpected graphite batch pending: %d", c.BatchPending)
|
||||
} else if time.Duration(c.BatchTimeout) != time.Second {
|
||||
t.Fatalf("unexpected graphite batch timeout: %v", c.BatchTimeout)
|
||||
} else if c.ConsistencyLevel != "one" {
|
||||
t.Fatalf("unexpected graphite consistency setting: %s", c.ConsistencyLevel)
|
||||
}
|
||||
|
||||
if len(c.Templates) != 1 || c.Templates[0] != "servers.* .host.measurement*" {
t.Fatalf("unexpected graphite templates setting: %v", c.Templates)
}
if len(c.Tags) != 1 || c.Tags[0] != "region=us-east" {
t.Fatalf("unexpected graphite tags setting: %v", c.Tags)
}
|
||||
}
|
||||
|
||||
func TestConfigValidateEmptyTemplate(t *testing.T) {
|
||||
c := &graphite.Config{}
|
||||
c.Templates = []string{""}
|
||||
if err := c.Validate(); err == nil {
|
||||
t.Errorf("config validate expected error. got nil")
|
||||
}
|
||||
|
||||
c.Templates = []string{" "}
|
||||
if err := c.Validate(); err == nil {
|
||||
t.Errorf("config validate expected error. got nil")
|
||||
}
|
||||
}
|
||||
|
||||
func TestConfigValidateTooManyField(t *testing.T) {
|
||||
c := &graphite.Config{}
|
||||
c.Templates = []string{"a measurement b c"}
|
||||
if err := c.Validate(); err == nil {
|
||||
t.Errorf("config validate expected error. got nil")
|
||||
}
|
||||
}
|
||||
|
||||
func TestConfigValidateTemplatePatterns(t *testing.T) {
|
||||
c := &graphite.Config{}
|
||||
c.Templates = []string{"*measurement"}
|
||||
if err := c.Validate(); err == nil {
|
||||
t.Errorf("config validate expected error. got nil")
|
||||
}
|
||||
|
||||
c.Templates = []string{".host.region"}
|
||||
if err := c.Validate(); err == nil {
|
||||
t.Errorf("config validate expected error. got nil")
|
||||
}
|
||||
}
|
||||
|
||||
func TestConfigValidateFilter(t *testing.T) {
|
||||
c := &graphite.Config{}
|
||||
c.Templates = []string{".server measurement*"}
|
||||
if err := c.Validate(); err == nil {
|
||||
t.Errorf("config validate expected error. got nil")
|
||||
}
|
||||
|
||||
c.Templates = []string{". .server measurement*"}
|
||||
if err := c.Validate(); err == nil {
|
||||
t.Errorf("config validate expected error. got nil")
|
||||
}
|
||||
|
||||
c.Templates = []string{"server* measurement*"}
|
||||
if err := c.Validate(); err == nil {
|
||||
t.Errorf("config validate expected error. got nil")
|
||||
}
|
||||
}
|
||||
|
||||
func TestConfigValidateTemplateTags(t *testing.T) {
|
||||
c := &graphite.Config{}
|
||||
c.Templates = []string{"*.server measurement* foo"}
|
||||
if err := c.Validate(); err == nil {
|
||||
t.Errorf("config validate expected error. got nil")
|
||||
}
|
||||
|
||||
c.Templates = []string{"*.server measurement* foo=bar="}
|
||||
if err := c.Validate(); err == nil {
|
||||
t.Errorf("config validate expected error. got nil")
|
||||
}
|
||||
|
||||
c.Templates = []string{"*.server measurement* foo=bar,"}
|
||||
if err := c.Validate(); err == nil {
|
||||
t.Errorf("config validate expected error. got nil")
|
||||
}
|
||||
|
||||
c.Templates = []string{"*.server measurement* ="}
|
||||
if err := c.Validate(); err == nil {
|
||||
t.Errorf("config validate expected error. got nil")
|
||||
}
|
||||
}
|
||||
|
||||
func TestConfigValidateDefaultTags(t *testing.T) {
|
||||
c := &graphite.Config{}
|
||||
c.Tags = []string{"foo"}
|
||||
if err := c.Validate(); err == nil {
|
||||
t.Errorf("config validate expected error. got nil")
|
||||
}
|
||||
|
||||
c.Tags = []string{"foo=bar="}
|
||||
if err := c.Validate(); err == nil {
|
||||
t.Errorf("config validate expected error. got nil")
|
||||
}
|
||||
|
||||
c.Tags = []string{"foo=bar", ""}
|
||||
if err := c.Validate(); err == nil {
|
||||
t.Errorf("config validate expected error. got nil")
|
||||
}
|
||||
|
||||
c.Tags = []string{"="}
|
||||
if err := c.Validate(); err == nil {
|
||||
t.Errorf("config validate expected error. got nil")
|
||||
}
|
||||
}
|
||||
|
||||
func TestConfigValidateFilterDuplicates(t *testing.T) {
|
||||
c := &graphite.Config{}
|
||||
c.Templates = []string{"foo measurement*", "foo .host.measurement"}
|
||||
if err := c.Validate(); err == nil {
|
||||
t.Errorf("config validate expected error. got nil")
|
||||
}
|
||||
|
||||
// duplicate default templates
|
||||
c.Templates = []string{"measurement*", ".host.measurement"}
|
||||
if err := c.Validate(); err == nil {
|
||||
t.Errorf("config validate expected error. got nil")
|
||||
}
|
||||
|
||||
}
|
14 lines: vendor/github.com/influxdata/influxdb/services/graphite/errors.go (generated, vendored, new file)
@@ -0,0 +1,14 @@
package graphite

import "fmt"

// An UnsupportedValueError is returned when a parsed value is not
// supported.
type UnsupportedValueError struct {
    Field string
    Value float64
}

func (err *UnsupportedValueError) Error() string {
    return fmt.Sprintf(`field "%s" value: "%v" is unsupported`, err.Field, err.Value)
}
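A sketch of when this error surfaces (mirroring `TestParseNaN` in `parser_test.go` below): the parser rejects NaN and infinite values with an `*UnsupportedValueError`:

```
package main

import (
    "fmt"
    "log"

    "github.com/influxdata/influxdb/services/graphite"
)

func main() {
    p, err := graphite.NewParser([]string{"measurement*"}, nil)
    if err != nil {
        log.Fatal(err)
    }

    _, err = p.Parse("servers.localhost.cpu_load NaN 1435077219")
    if uerr, ok := err.(*graphite.UnsupportedValueError); ok {
        // e.g. field "servers.localhost.cpu_load" value: "NaN" is unsupported
        fmt.Println(uerr)
    }
}
```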
422 lines: vendor/github.com/influxdata/influxdb/services/graphite/parser.go (generated, vendored, new file)
@@ -0,0 +1,422 @@
|
||||
package graphite
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"math"
|
||||
"sort"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/influxdata/influxdb/models"
|
||||
)
|
||||
|
||||
// Minimum and maximum supported dates for timestamps.
|
||||
var (
|
||||
// The minimum graphite timestamp allowed.
|
||||
MinDate = time.Date(1901, 12, 13, 0, 0, 0, 0, time.UTC)
|
||||
|
||||
// The maximum graphite timestamp allowed.
|
||||
MaxDate = time.Date(2038, 1, 19, 0, 0, 0, 0, time.UTC)
|
||||
)
|
||||
|
||||
var defaultTemplate *template
|
||||
|
||||
func init() {
|
||||
var err error
|
||||
defaultTemplate, err = NewTemplate("measurement*", nil, DefaultSeparator)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
}
|
||||
|
||||
// Parser encapsulates a Graphite Parser.
|
||||
type Parser struct {
|
||||
matcher *matcher
|
||||
tags models.Tags
|
||||
}
|
||||
|
||||
// Options are configurable values that can be provided to a Parser.
|
||||
type Options struct {
|
||||
Separator string
|
||||
Templates []string
|
||||
DefaultTags models.Tags
|
||||
}
|
||||
|
||||
// NewParserWithOptions returns a graphite parser using the given options.
|
||||
func NewParserWithOptions(options Options) (*Parser, error) {
|
||||
|
||||
matcher := newMatcher()
|
||||
matcher.AddDefaultTemplate(defaultTemplate)
|
||||
|
||||
for _, pattern := range options.Templates {
|
||||
|
||||
template := pattern
|
||||
filter := ""
|
||||
// Format is [filter] <template> [tag1=value1,tag2=value2]
|
||||
parts := strings.Fields(pattern)
|
||||
if len(parts) < 1 {
|
||||
continue
|
||||
} else if len(parts) >= 2 {
|
||||
if strings.Contains(parts[1], "=") {
|
||||
template = parts[0]
|
||||
} else {
|
||||
filter = parts[0]
|
||||
template = parts[1]
|
||||
}
|
||||
}
|
||||
|
||||
// Parse out the default tags specific to this template
|
||||
var tags models.Tags
|
||||
if strings.Contains(parts[len(parts)-1], "=") {
|
||||
tagStrs := strings.Split(parts[len(parts)-1], ",")
|
||||
for _, kv := range tagStrs {
|
||||
parts := strings.Split(kv, "=")
|
||||
tags.SetString(parts[0], parts[1])
|
||||
}
|
||||
}
|
||||
|
||||
tmpl, err := NewTemplate(template, tags, options.Separator)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
matcher.Add(filter, tmpl)
|
||||
}
|
||||
return &Parser{matcher: matcher, tags: options.DefaultTags}, nil
|
||||
}
|
||||
|
||||
// NewParser returns a GraphiteParser instance.
|
||||
func NewParser(templates []string, defaultTags models.Tags) (*Parser, error) {
|
||||
return NewParserWithOptions(
|
||||
Options{
|
||||
Templates: templates,
|
||||
DefaultTags: defaultTags,
|
||||
Separator: DefaultSeparator,
|
||||
})
|
||||
}
|
||||
|
||||
// Parse performs Graphite parsing of a single line.
|
||||
func (p *Parser) Parse(line string) (models.Point, error) {
|
||||
// Break into 3 fields (name, value, timestamp).
|
||||
fields := strings.Fields(line)
|
||||
if len(fields) != 2 && len(fields) != 3 {
|
||||
return nil, fmt.Errorf("received %q which doesn't have required fields", line)
|
||||
}
|
||||
|
||||
// decode the name and tags
|
||||
template := p.matcher.Match(fields[0])
|
||||
measurement, tags, field, err := template.Apply(fields[0])
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Could not extract measurement, use the raw value
|
||||
if measurement == "" {
|
||||
measurement = fields[0]
|
||||
}
|
||||
|
||||
// Parse value.
|
||||
v, err := strconv.ParseFloat(fields[1], 64)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf(`field "%s" value: %s`, fields[0], err)
|
||||
}
|
||||
|
||||
if math.IsNaN(v) || math.IsInf(v, 0) {
|
||||
return nil, &UnsupportedValueError{Field: fields[0], Value: v}
|
||||
}
|
||||
|
||||
fieldValues := map[string]interface{}{}
|
||||
if field != "" {
|
||||
fieldValues[field] = v
|
||||
} else {
|
||||
fieldValues["value"] = v
|
||||
}
|
||||
|
||||
// If no 3rd field, use now as timestamp
|
||||
timestamp := time.Now().UTC()
|
||||
|
||||
if len(fields) == 3 {
|
||||
// Parse timestamp.
|
||||
unixTime, err := strconv.ParseFloat(fields[2], 64)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf(`field "%s" time: %s`, fields[0], err)
|
||||
}
|
||||
|
||||
// -1 is a special value that gets converted to current UTC time
|
||||
// See https://github.com/graphite-project/carbon/issues/54
|
||||
if unixTime != float64(-1) {
|
||||
// Check if we have fractional seconds
|
||||
timestamp = time.Unix(int64(unixTime), int64((unixTime-math.Floor(unixTime))*float64(time.Second)))
|
||||
if timestamp.Before(MinDate) || timestamp.After(MaxDate) {
|
||||
return nil, fmt.Errorf("timestamp out of range")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Set the default tags on the point if they are not already set
|
||||
for _, t := range p.tags {
|
||||
if _, ok := tags[string(t.Key)]; !ok {
|
||||
tags[string(t.Key)] = string(t.Value)
|
||||
}
|
||||
}
|
||||
return models.NewPoint(measurement, models.NewTags(tags), fieldValues, timestamp)
|
||||
}
|
||||
|
||||
// ApplyTemplate extracts the template fields from the given line and
|
||||
// returns the measurement name and tags.
|
||||
func (p *Parser) ApplyTemplate(line string) (string, map[string]string, string, error) {
|
||||
// Break line into fields (name, value, timestamp), only name is used
|
||||
fields := strings.Fields(line)
|
||||
if len(fields) == 0 {
|
||||
return "", make(map[string]string), "", nil
|
||||
}
|
||||
// decode the name and tags
|
||||
template := p.matcher.Match(fields[0])
|
||||
name, tags, field, err := template.Apply(fields[0])
|
||||
// Set the default tags on the point if they are not already set
|
||||
for _, t := range p.tags {
|
||||
if _, ok := tags[string(t.Key)]; !ok {
|
||||
tags[string(t.Key)] = string(t.Value)
|
||||
}
|
||||
}
|
||||
return name, tags, field, err
|
||||
}
|
||||
|
||||
// template represents a pattern and tags to map a Graphite metric string to an InfluxDB Point.
|
||||
type template struct {
|
||||
tags []string
|
||||
defaultTags models.Tags
|
||||
greedyMeasurement bool
|
||||
separator string
|
||||
}
|
||||
|
||||
// NewTemplate returns a new template ensuring it has a measurement
|
||||
// specified.
|
||||
func NewTemplate(pattern string, defaultTags models.Tags, separator string) (*template, error) {
|
||||
tags := strings.Split(pattern, ".")
|
||||
hasMeasurement := false
|
||||
template := &template{tags: tags, defaultTags: defaultTags, separator: separator}
|
||||
|
||||
for _, tag := range tags {
|
||||
if strings.HasPrefix(tag, "measurement") {
|
||||
hasMeasurement = true
|
||||
}
|
||||
if tag == "measurement*" {
|
||||
template.greedyMeasurement = true
|
||||
}
|
||||
}
|
||||
|
||||
if !hasMeasurement {
|
||||
return nil, fmt.Errorf("no measurement specified for template. %q", pattern)
|
||||
}
|
||||
|
||||
return template, nil
|
||||
}
|
||||
|
||||
// Apply extracts the template fields from the given line and returns the measurement
|
||||
// name and tags.
|
||||
func (t *template) Apply(line string) (string, map[string]string, string, error) {
|
||||
fields := strings.Split(line, ".")
|
||||
var (
|
||||
measurement []string
|
||||
tags = make(map[string][]string)
|
||||
field string
|
||||
hasFieldWildcard = false
|
||||
hasMeasurementWildcard = false
|
||||
)
|
||||
|
||||
// Set any default tags
|
||||
for _, t := range t.defaultTags {
|
||||
tags[string(t.Key)] = append(tags[string(t.Key)], string(t.Value))
|
||||
}
|
||||
|
||||
// See if an invalid combination has been specified in the template:
|
||||
for _, tag := range t.tags {
|
||||
if tag == "measurement*" {
|
||||
hasMeasurementWildcard = true
|
||||
} else if tag == "field*" {
|
||||
hasFieldWildcard = true
|
||||
}
|
||||
}
|
||||
if hasFieldWildcard && hasMeasurementWildcard {
|
||||
return "", nil, "", fmt.Errorf("either 'field*' or 'measurement*' can be used in each template (but not both together): %q", strings.Join(t.tags, t.separator))
|
||||
}
|
||||
|
||||
for i, tag := range t.tags {
|
||||
if i >= len(fields) {
|
||||
continue
|
||||
}
|
||||
|
||||
if tag == "measurement" {
|
||||
measurement = append(measurement, fields[i])
|
||||
} else if tag == "field" {
|
||||
if len(field) != 0 {
|
||||
return "", nil, "", fmt.Errorf("'field' can only be used once in each template: %q", line)
|
||||
}
|
||||
field = fields[i]
|
||||
} else if tag == "field*" {
|
||||
field = strings.Join(fields[i:], t.separator)
|
||||
break
|
||||
} else if tag == "measurement*" {
|
||||
measurement = append(measurement, fields[i:]...)
|
||||
break
|
||||
} else if tag != "" {
|
||||
tags[tag] = append(tags[tag], fields[i])
|
||||
}
|
||||
}
|
||||
|
||||
// Convert to map of strings.
|
||||
out_tags := make(map[string]string)
|
||||
for k, values := range tags {
|
||||
out_tags[k] = strings.Join(values, t.separator)
|
||||
}
|
||||
|
||||
return strings.Join(measurement, t.separator), out_tags, field, nil
|
||||
}
|
||||
|
||||
// matcher determines which template should be applied to a given metric
|
||||
// based on a filter tree.
|
||||
type matcher struct {
|
||||
root *node
|
||||
defaultTemplate *template
|
||||
}
|
||||
|
||||
func newMatcher() *matcher {
|
||||
return &matcher{
|
||||
root: &node{},
|
||||
}
|
||||
}
|
||||
|
||||
// Add inserts the template in the filter tree based on the given filter.
|
||||
func (m *matcher) Add(filter string, template *template) {
|
||||
if filter == "" {
|
||||
m.AddDefaultTemplate(template)
|
||||
return
|
||||
}
|
||||
m.root.Insert(filter, template)
|
||||
}
|
||||
|
||||
func (m *matcher) AddDefaultTemplate(template *template) {
|
||||
m.defaultTemplate = template
|
||||
}
|
||||
|
||||
// Match returns the template that matches the given graphite line.
|
||||
func (m *matcher) Match(line string) *template {
|
||||
tmpl := m.root.Search(line)
|
||||
if tmpl != nil {
|
||||
return tmpl
|
||||
}
|
||||
|
||||
return m.defaultTemplate
|
||||
}
|
||||
|
||||
// node is an item in a sorted k-ary tree. Each child is sorted by its value.
|
||||
// The special value of "*", is always last.
|
||||
type node struct {
|
||||
value string
|
||||
children nodes
|
||||
template *template
|
||||
}
|
||||
|
||||
func (n *node) insert(values []string, template *template) {
|
||||
// At the end of the filter, set the template
|
||||
if len(values) == 0 {
|
||||
n.template = template
|
||||
return
|
||||
}
|
||||
|
||||
// See if the current element already exists in the tree. If so, insert
// into that sub-tree
|
||||
for _, v := range n.children {
|
||||
if v.value == values[0] {
|
||||
v.insert(values[1:], template)
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
// New element, add it to the tree and sort the children
|
||||
newNode := &node{value: values[0]}
|
||||
n.children = append(n.children, newNode)
|
||||
sort.Sort(&n.children)
|
||||
|
||||
// Inherit template if value is wildcard
|
||||
if values[0] == "*" {
|
||||
newNode.template = n.template
|
||||
}
|
||||
|
||||
// Now insert the rest of the tree into the new element
|
||||
newNode.insert(values[1:], template)
|
||||
}
|
||||
|
||||
// Insert inserts the given string template into the tree. The filter string is separated
|
||||
// on "." and each part is used as the path in the tree.
|
||||
func (n *node) Insert(filter string, template *template) {
|
||||
n.insert(strings.Split(filter, "."), template)
|
||||
}
|
||||
|
||||
func (n *node) search(lineParts []string) *template {
|
||||
// Nothing to search
|
||||
if len(lineParts) == 0 || len(n.children) == 0 {
|
||||
return n.template
|
||||
}
|
||||
|
||||
// If the last element is a wildcard, don't include it in this search since it's
// sorted to the end, but lexicographically it would not always be, and
// sort.Search assumes the slice is sorted.
|
||||
length := len(n.children)
|
||||
if n.children[length-1].value == "*" {
|
||||
length--
|
||||
}
|
||||
|
||||
// Find the index of child with an exact match
|
||||
i := sort.Search(length, func(i int) bool {
|
||||
return n.children[i].value >= lineParts[0]
|
||||
})
|
||||
|
||||
// Found an exact match, so search that child sub-tree
|
||||
if i < len(n.children) && n.children[i].value == lineParts[0] {
|
||||
return n.children[i].search(lineParts[1:])
|
||||
}
|
||||
// Not an exact match, see if we have a wildcard child to search
|
||||
if n.children[len(n.children)-1].value == "*" {
|
||||
return n.children[len(n.children)-1].search(lineParts[1:])
|
||||
}
|
||||
return n.template
|
||||
}
|
||||
|
||||
func (n *node) Search(line string) *template {
|
||||
return n.search(strings.Split(line, "."))
|
||||
}
|
||||
|
||||
type nodes []*node
|
||||
|
||||
// Less returns a boolean indicating whether the filter at position j
|
||||
// is less than the filter at position k. Filters are order by string
|
||||
// comparison of each component parts. A wildcard value "*" is never
|
||||
// less than a non-wildcard value.
|
||||
//
|
||||
// For example, the filters:
|
||||
// "*.*"
|
||||
// "servers.*"
|
||||
// "servers.localhost"
|
||||
// "*.localhost"
|
||||
//
|
||||
// Would be sorted as:
|
||||
// "servers.localhost"
|
||||
// "servers.*"
|
||||
// "*.localhost"
|
||||
// "*.*"
|
||||
func (n *nodes) Less(j, k int) bool {
|
||||
if (*n)[j].value == "*" && (*n)[k].value != "*" {
|
||||
return false
|
||||
}
|
||||
|
||||
if (*n)[j].value != "*" && (*n)[k].value == "*" {
|
||||
return true
|
||||
}
|
||||
|
||||
return (*n)[j].value < (*n)[k].value
|
||||
}
|
||||
|
||||
func (n *nodes) Swap(i, j int) { (*n)[i], (*n)[j] = (*n)[j], (*n)[i] }
|
||||
func (n *nodes) Len() int { return len(*n) }
|
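A short sketch of the timestamp handling implemented above: the timestamp field is optional, `-1` is treated as "now", and fractional epoch seconds are preserved (the sample values below are arbitrary):

```
package main

import (
    "fmt"
    "log"

    "github.com/influxdata/influxdb/services/graphite"
)

func main() {
    p, err := graphite.NewParser([]string{"measurement*"}, nil)
    if err != nil {
        log.Fatal(err)
    }

    for _, line := range []string{
        "cpu.load 0.5",              // no timestamp: the current UTC time is used
        "cpu.load 0.5 -1",           // -1: also treated as "now"
        "cpu.load 0.5 1444234982.5", // fractional epoch seconds are preserved
    } {
        pt, err := p.Parse(line)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(pt.Time().UTC())
    }
}
```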
724 lines: vendor/github.com/influxdata/influxdb/services/graphite/parser_test.go (generated, vendored, new file)
@@ -0,0 +1,724 @@
|
||||
package graphite_test
|
||||
|
||||
import (
|
||||
"reflect"
|
||||
"strconv"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/influxdata/influxdb/models"
|
||||
"github.com/influxdata/influxdb/services/graphite"
|
||||
)
|
||||
|
||||
func BenchmarkParse(b *testing.B) {
|
||||
p, err := graphite.NewParser([]string{
|
||||
"*.* .wrong.measurement*",
|
||||
"servers.* .host.measurement*",
|
||||
"servers.localhost .host.measurement*",
|
||||
"*.localhost .host.measurement*",
|
||||
"*.*.cpu .host.measurement*",
|
||||
"a.b.c .host.measurement*",
|
||||
"influxd.*.foo .host.measurement*",
|
||||
"prod.*.mem .host.measurement*",
|
||||
}, nil)
|
||||
|
||||
if err != nil {
|
||||
b.Fatalf("unexpected error creating parser, got %v", err)
|
||||
}
|
||||
|
||||
for i := 0; i < b.N; i++ {
|
||||
p.Parse("servers.localhost.cpu.load 11 1435077219")
|
||||
}
|
||||
}
|
||||
|
||||
func TestTemplateApply(t *testing.T) {
|
||||
var tests = []struct {
|
||||
test string
|
||||
input string
|
||||
template string
|
||||
measurement string
|
||||
tags map[string]string
|
||||
err string
|
||||
}{
|
||||
{
|
||||
test: "metric only",
|
||||
input: "cpu",
|
||||
template: "measurement",
|
||||
measurement: "cpu",
|
||||
},
|
||||
{
|
||||
test: "metric with single series",
|
||||
input: "cpu.server01",
|
||||
template: "measurement.hostname",
|
||||
measurement: "cpu",
|
||||
tags: map[string]string{"hostname": "server01"},
|
||||
},
|
||||
{
|
||||
test: "metric with multiple series",
|
||||
input: "cpu.us-west.server01",
|
||||
template: "measurement.region.hostname",
|
||||
measurement: "cpu",
|
||||
tags: map[string]string{"hostname": "server01", "region": "us-west"},
|
||||
},
|
||||
{
|
||||
test: "metric with multiple tags",
|
||||
input: "server01.example.org.cpu.us-west",
|
||||
template: "hostname.hostname.hostname.measurement.region",
|
||||
measurement: "cpu",
|
||||
tags: map[string]string{"hostname": "server01.example.org", "region": "us-west"},
|
||||
},
|
||||
{
|
||||
test: "no metric",
|
||||
tags: make(map[string]string),
|
||||
err: `no measurement specified for template. ""`,
|
||||
},
|
||||
{
|
||||
test: "ignore unnamed",
|
||||
input: "foo.cpu",
|
||||
template: "measurement",
|
||||
measurement: "foo",
|
||||
tags: make(map[string]string),
|
||||
},
|
||||
{
|
||||
test: "name shorter than template",
|
||||
input: "foo",
|
||||
template: "measurement.A.B.C",
|
||||
measurement: "foo",
|
||||
tags: make(map[string]string),
|
||||
},
|
||||
{
|
||||
test: "wildcard measurement at end",
|
||||
input: "prod.us-west.server01.cpu.load",
|
||||
template: "env.zone.host.measurement*",
|
||||
measurement: "cpu.load",
|
||||
tags: map[string]string{"env": "prod", "zone": "us-west", "host": "server01"},
|
||||
},
|
||||
{
|
||||
test: "skip fields",
|
||||
input: "ignore.us-west.ignore-this-too.cpu.load",
|
||||
template: ".zone..measurement*",
|
||||
measurement: "cpu.load",
|
||||
tags: map[string]string{"zone": "us-west"},
|
||||
},
|
||||
{
|
||||
test: "conjoined fields",
|
||||
input: "prod.us-west.server01.cpu.util.idle.percent",
|
||||
template: "env.zone.host.measurement.measurement.field*",
|
||||
measurement: "cpu.util",
|
||||
tags: map[string]string{"env": "prod", "zone": "us-west", "host": "server01"},
|
||||
},
|
||||
}
|
||||
|
||||
for _, test := range tests {
|
||||
tmpl, err := graphite.NewTemplate(test.template, nil, graphite.DefaultSeparator)
|
||||
if errstr(err) != test.err {
|
||||
t.Fatalf("err does not match. expected %v, got %v", test.err, err)
|
||||
}
|
||||
if err != nil {
|
||||
// If we erred out, it was intended and the following tests won't work
|
||||
continue
|
||||
}
|
||||
|
||||
measurement, tags, _, _ := tmpl.Apply(test.input)
|
||||
if measurement != test.measurement {
|
||||
t.Fatalf("name parse failer. expected %v, got %v", test.measurement, measurement)
|
||||
}
|
||||
if len(tags) != len(test.tags) {
|
||||
t.Fatalf("unexpected number of tags. expected %v, got %v", test.tags, tags)
|
||||
}
|
||||
for k, v := range test.tags {
|
||||
if tags[k] != v {
|
||||
t.Fatalf("unexpected tag value for tags[%s]. expected %q, got %q", k, v, tags[k])
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestParseMissingMeasurement(t *testing.T) {
|
||||
_, err := graphite.NewParser([]string{"a.b.c"}, nil)
|
||||
if err == nil {
|
||||
t.Fatalf("expected error creating parser, got nil")
|
||||
}
|
||||
}
|
||||
|
||||
func TestParse(t *testing.T) {
|
||||
testTime := time.Now().Round(time.Second)
|
||||
epochTime := testTime.Unix()
|
||||
strTime := strconv.FormatInt(epochTime, 10)
|
||||
|
||||
var tests = []struct {
|
||||
test string
|
||||
input string
|
||||
measurement string
|
||||
tags map[string]string
|
||||
value float64
|
||||
time time.Time
|
||||
template string
|
||||
err string
|
||||
}{
|
||||
{
|
||||
test: "normal case",
|
||||
input: `cpu.foo.bar 50 ` + strTime,
|
||||
template: "measurement.foo.bar",
|
||||
measurement: "cpu",
|
||||
tags: map[string]string{
|
||||
"foo": "foo",
|
||||
"bar": "bar",
|
||||
},
|
||||
value: 50,
|
||||
time: testTime,
|
||||
},
|
||||
{
|
||||
test: "metric only with float value",
|
||||
input: `cpu 50.554 ` + strTime,
|
||||
measurement: "cpu",
|
||||
template: "measurement",
|
||||
value: 50.554,
|
||||
time: testTime,
|
||||
},
|
||||
{
|
||||
test: "missing metric",
|
||||
input: `1419972457825`,
|
||||
template: "measurement",
|
||||
err: `received "1419972457825" which doesn't have required fields`,
|
||||
},
|
||||
{
|
||||
test: "should error parsing invalid float",
|
||||
input: `cpu 50.554z 1419972457825`,
|
||||
template: "measurement",
|
||||
err: `field "cpu" value: strconv.ParseFloat: parsing "50.554z": invalid syntax`,
|
||||
},
|
||||
{
|
||||
test: "should error parsing invalid int",
|
||||
input: `cpu 50z 1419972457825`,
|
||||
template: "measurement",
|
||||
err: `field "cpu" value: strconv.ParseFloat: parsing "50z": invalid syntax`,
|
||||
},
|
||||
{
|
||||
test: "should error parsing invalid time",
|
||||
input: `cpu 50.554 14199724z57825`,
|
||||
template: "measurement",
|
||||
err: `field "cpu" time: strconv.ParseFloat: parsing "14199724z57825": invalid syntax`,
|
||||
},
|
||||
{
|
||||
test: "measurement* and field* (invalid)",
|
||||
input: `prod.us-west.server01.cpu.util.idle.percent 99.99 1419972457825`,
|
||||
template: "env.zone.host.measurement*.field*",
|
||||
err: `either 'field*' or 'measurement*' can be used in each template (but not both together): "env.zone.host.measurement*.field*"`,
|
||||
},
|
||||
}
|
||||
|
||||
for _, test := range tests {
|
||||
p, err := graphite.NewParser([]string{test.template}, nil)
|
||||
if err != nil {
|
||||
t.Fatalf("unexpected error creating graphite parser: %v", err)
|
||||
}
|
||||
|
||||
point, err := p.Parse(test.input)
|
||||
if errstr(err) != test.err {
|
||||
t.Fatalf("err does not match. expected %v, got %v", test.err, err)
|
||||
}
|
||||
if err != nil {
|
||||
// If we erred out, it was intended and the following tests won't work
|
||||
continue
|
||||
}
|
||||
if string(point.Name()) != test.measurement {
|
||||
t.Fatalf("name parse failer. expected %v, got %v", test.measurement, string(point.Name()))
|
||||
}
|
||||
if len(point.Tags()) != len(test.tags) {
|
||||
t.Fatalf("tags len mismatch. expected %d, got %d", len(test.tags), len(point.Tags()))
|
||||
}
|
||||
fields, err := point.Fields()
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
f := fields["value"].(float64)
|
||||
if fields["value"] != f {
|
||||
t.Fatalf("floatValue value mismatch. expected %v, got %v", test.value, f)
|
||||
}
|
||||
if point.Time().UnixNano()/1000000 != test.time.UnixNano()/1000000 {
|
||||
t.Fatalf("time value mismatch. expected %v, got %v", test.time.UnixNano(), point.Time().UnixNano())
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestParseNaN(t *testing.T) {
|
||||
p, err := graphite.NewParser([]string{"measurement*"}, nil)
|
||||
if err != nil {
|
||||
t.Fatalf("unexpected error creating parser, got %v", err)
|
||||
}
|
||||
|
||||
_, err = p.Parse("servers.localhost.cpu_load NaN 1435077219")
|
||||
if err == nil {
|
||||
t.Fatalf("expected error. got nil")
|
||||
}
|
||||
|
||||
if _, ok := err.(*graphite.UnsupportedValueError); !ok {
|
||||
t.Fatalf("expected *graphite.ErrUnsupportedValue, got %v", reflect.TypeOf(err))
|
||||
}
|
||||
}
|
||||
|
||||
func TestFilterMatchDefault(t *testing.T) {
|
||||
p, err := graphite.NewParser([]string{"servers.localhost .host.measurement*"}, nil)
|
||||
if err != nil {
|
||||
t.Fatalf("unexpected error creating parser, got %v", err)
|
||||
}
|
||||
|
||||
exp := models.MustNewPoint("miss.servers.localhost.cpu_load",
|
||||
models.NewTags(map[string]string{}),
|
||||
models.Fields{"value": float64(11)},
|
||||
time.Unix(1435077219, 0))
|
||||
|
||||
pt, err := p.Parse("miss.servers.localhost.cpu_load 11 1435077219")
|
||||
if err != nil {
|
||||
t.Fatalf("parse error: %v", err)
|
||||
}
|
||||
|
||||
if exp.String() != pt.String() {
|
||||
t.Errorf("parse mismatch: got %v, exp %v", pt.String(), exp.String())
|
||||
}
|
||||
}
|
||||
|
||||
func TestFilterMatchMultipleMeasurement(t *testing.T) {
|
||||
p, err := graphite.NewParser([]string{"servers.localhost .host.measurement.measurement*"}, nil)
|
||||
if err != nil {
|
||||
t.Fatalf("unexpected error creating parser, got %v", err)
|
||||
}
|
||||
|
||||
exp := models.MustNewPoint("cpu.cpu_load.10",
|
||||
models.NewTags(map[string]string{"host": "localhost"}),
|
||||
models.Fields{"value": float64(11)},
|
||||
time.Unix(1435077219, 0))
|
||||
|
||||
pt, err := p.Parse("servers.localhost.cpu.cpu_load.10 11 1435077219")
|
||||
if err != nil {
|
||||
t.Fatalf("parse error: %v", err)
|
||||
}
|
||||
|
||||
if exp.String() != pt.String() {
|
||||
t.Errorf("parse mismatch: got %v, exp %v", pt.String(), exp.String())
|
||||
}
|
||||
}
|
||||
|
||||
func TestFilterMatchMultipleMeasurementSeparator(t *testing.T) {
|
||||
p, err := graphite.NewParserWithOptions(graphite.Options{
|
||||
Templates: []string{"servers.localhost .host.measurement.measurement*"},
|
||||
Separator: "_",
|
||||
})
|
||||
if err != nil {
|
||||
t.Fatalf("unexpected error creating parser, got %v", err)
|
||||
}
|
||||
|
||||
exp := models.MustNewPoint("cpu_cpu_load_10",
|
||||
models.NewTags(map[string]string{"host": "localhost"}),
|
||||
models.Fields{"value": float64(11)},
|
||||
time.Unix(1435077219, 0))
|
||||
|
||||
pt, err := p.Parse("servers.localhost.cpu.cpu_load.10 11 1435077219")
|
||||
if err != nil {
|
||||
t.Fatalf("parse error: %v", err)
|
||||
}
|
||||
|
||||
if exp.String() != pt.String() {
|
||||
t.Errorf("parse mismatch: got %v, exp %v", pt.String(), exp.String())
|
||||
}
|
||||
}
|
||||
|
||||
func TestFilterMatchSingle(t *testing.T) {
|
||||
p, err := graphite.NewParser([]string{"servers.localhost .host.measurement*"}, nil)
|
||||
if err != nil {
|
||||
t.Fatalf("unexpected error creating parser, got %v", err)
|
||||
}
|
||||
|
||||
exp := models.MustNewPoint("cpu_load",
|
||||
models.NewTags(map[string]string{"host": "localhost"}),
|
||||
models.Fields{"value": float64(11)},
|
||||
time.Unix(1435077219, 0))
|
||||
|
||||
pt, err := p.Parse("servers.localhost.cpu_load 11 1435077219")
|
||||
if err != nil {
|
||||
t.Fatalf("parse error: %v", err)
|
||||
}
|
||||
|
||||
if exp.String() != pt.String() {
|
||||
t.Errorf("parse mismatch: got %v, exp %v", pt.String(), exp.String())
|
||||
}
|
||||
}
|
||||
|
||||
func TestParseNoMatch(t *testing.T) {
|
||||
p, err := graphite.NewParser([]string{"servers.*.cpu .host.measurement.cpu.measurement"}, nil)
|
||||
if err != nil {
|
||||
t.Fatalf("unexpected error creating parser, got %v", err)
|
||||
}
|
||||
|
||||
exp := models.MustNewPoint("servers.localhost.memory.VmallocChunk",
|
||||
models.NewTags(map[string]string{}),
|
||||
models.Fields{"value": float64(11)},
|
||||
time.Unix(1435077219, 0))
|
||||
|
||||
pt, err := p.Parse("servers.localhost.memory.VmallocChunk 11 1435077219")
|
||||
if err != nil {
|
||||
t.Fatalf("parse error: %v", err)
|
||||
}
|
||||
|
||||
if exp.String() != pt.String() {
|
||||
t.Errorf("parse mismatch: got %v, exp %v", pt.String(), exp.String())
|
||||
}
|
||||
}
|
||||
|
||||
func TestFilterMatchWildcard(t *testing.T) {
|
||||
p, err := graphite.NewParser([]string{"servers.* .host.measurement*"}, nil)
|
||||
if err != nil {
|
||||
t.Fatalf("unexpected error creating parser, got %v", err)
|
||||
}
|
||||
|
||||
exp := models.MustNewPoint("cpu_load",
|
||||
models.NewTags(map[string]string{"host": "localhost"}),
|
||||
models.Fields{"value": float64(11)},
|
||||
time.Unix(1435077219, 0))
|
||||
|
||||
pt, err := p.Parse("servers.localhost.cpu_load 11 1435077219")
|
||||
if err != nil {
|
||||
t.Fatalf("parse error: %v", err)
|
||||
}
|
||||
|
||||
if exp.String() != pt.String() {
|
||||
t.Errorf("parse mismatch: got %v, exp %v", pt.String(), exp.String())
|
||||
}
|
||||
}
|
||||
|
||||
func TestFilterMatchExactBeforeWildcard(t *testing.T) {
|
||||
p, err := graphite.NewParser([]string{
|
||||
"servers.* .wrong.measurement*",
|
||||
"servers.localhost .host.measurement*"}, nil)
|
||||
if err != nil {
|
||||
t.Fatalf("unexpected error creating parser, got %v", err)
|
||||
}
|
||||
|
||||
exp := models.MustNewPoint("cpu_load",
|
||||
models.NewTags(map[string]string{"host": "localhost"}),
|
||||
models.Fields{"value": float64(11)},
|
||||
time.Unix(1435077219, 0))
|
||||
|
||||
pt, err := p.Parse("servers.localhost.cpu_load 11 1435077219")
|
||||
if err != nil {
|
||||
t.Fatalf("parse error: %v", err)
|
||||
}
|
||||
|
||||
if exp.String() != pt.String() {
|
||||
t.Errorf("parse mismatch: got %v, exp %v", pt.String(), exp.String())
|
||||
}
|
||||
}
|
||||
|
||||
func TestFilterMatchMostLongestFilter(t *testing.T) {
|
||||
p, err := graphite.NewParser([]string{
|
||||
"*.* .wrong.measurement*",
|
||||
"servers.* .wrong.measurement*",
|
||||
"servers.localhost .wrong.measurement*",
|
||||
"servers.localhost.cpu .host.resource.measurement*", // should match this
|
||||
"*.localhost .wrong.measurement*",
|
||||
}, nil)
|
||||
|
||||
if err != nil {
|
||||
t.Fatalf("unexpected error creating parser, got %v", err)
|
||||
}
|
||||
|
||||
exp := models.MustNewPoint("cpu_load",
|
||||
models.NewTags(map[string]string{"host": "localhost", "resource": "cpu"}),
|
||||
models.Fields{"value": float64(11)},
|
||||
time.Unix(1435077219, 0))
|
||||
|
||||
pt, err := p.Parse("servers.localhost.cpu.cpu_load 11 1435077219")
|
||||
if err != nil {
|
||||
t.Fatalf("parse error: %v", err)
|
||||
}
|
||||
|
||||
if exp.String() != pt.String() {
|
||||
t.Errorf("parse mismatch: got %v, exp %v", pt.String(), exp.String())
|
||||
}
|
||||
}
|
||||
|
||||
func TestFilterMatchMultipleWildcards(t *testing.T) {
|
||||
p, err := graphite.NewParser([]string{
|
||||
"*.* .wrong.measurement*",
|
||||
"servers.* .host.measurement*", // should match this
|
||||
"servers.localhost .wrong.measurement*",
|
||||
"*.localhost .wrong.measurement*",
|
||||
}, nil)
|
||||
|
||||
if err != nil {
|
||||
t.Fatalf("unexpected error creating parser, got %v", err)
|
||||
}
|
||||
|
||||
exp := models.MustNewPoint("cpu_load",
|
||||
models.NewTags(map[string]string{"host": "server01"}),
|
||||
models.Fields{"value": float64(11)},
|
||||
time.Unix(1435077219, 0))
|
||||
|
||||
pt, err := p.Parse("servers.server01.cpu_load 11 1435077219")
|
||||
if err != nil {
|
||||
t.Fatalf("parse error: %v", err)
|
||||
}
|
||||
|
||||
if exp.String() != pt.String() {
|
||||
t.Errorf("parse mismatch: got %v, exp %v", pt.String(), exp.String())
|
||||
}
|
||||
}
|
||||
|
||||
func TestParseDefaultTags(t *testing.T) {
|
||||
p, err := graphite.NewParser([]string{"servers.localhost .host.measurement*"}, models.NewTags(map[string]string{
|
||||
"region": "us-east",
|
||||
"zone": "1c",
|
||||
"host": "should not set",
|
||||
}))
|
||||
if err != nil {
|
||||
t.Fatalf("unexpected error creating parser, got %v", err)
|
||||
}
|
||||
|
||||
exp := models.MustNewPoint("cpu_load",
|
||||
models.NewTags(map[string]string{"host": "localhost", "region": "us-east", "zone": "1c"}),
|
||||
models.Fields{"value": float64(11)},
|
||||
time.Unix(1435077219, 0))
|
||||
|
||||
pt, err := p.Parse("servers.localhost.cpu_load 11 1435077219")
|
||||
if err != nil {
|
||||
t.Fatalf("parse error: %v", err)
|
||||
}
|
||||
|
||||
if exp.String() != pt.String() {
|
||||
t.Errorf("parse mismatch: got %v, exp %v", pt.String(), exp.String())
|
||||
}
|
||||
}
|
||||
|
||||
func TestParseDefaultTemplateTags(t *testing.T) {
|
||||
p, err := graphite.NewParser([]string{"servers.localhost .host.measurement* zone=1c"}, models.NewTags(map[string]string{
|
||||
"region": "us-east",
|
||||
"host": "should not set",
|
||||
}))
|
||||
if err != nil {
|
||||
t.Fatalf("unexpected error creating parser, got %v", err)
|
||||
}
|
||||
|
||||
exp := models.MustNewPoint("cpu_load",
|
||||
models.NewTags(map[string]string{"host": "localhost", "region": "us-east", "zone": "1c"}),
|
||||
models.Fields{"value": float64(11)},
|
||||
time.Unix(1435077219, 0))
|
||||
|
||||
pt, err := p.Parse("servers.localhost.cpu_load 11 1435077219")
|
||||
if err != nil {
|
||||
t.Fatalf("parse error: %v", err)
|
||||
}
|
||||
|
||||
if exp.String() != pt.String() {
|
||||
t.Errorf("parse mismatch: got %v, exp %v", pt.String(), exp.String())
|
||||
}
|
||||
}
|
||||
|
||||
func TestParseDefaultTemplateTagsOverridGlobal(t *testing.T) {
	p, err := graphite.NewParser([]string{"servers.localhost .host.measurement* zone=1c,region=us-east"}, models.NewTags(map[string]string{
		"region": "should not be set",
		"host":   "should not set",
	}))
	if err != nil {
		t.Fatalf("unexpected error creating parser, got %v", err)
	}

	exp := models.MustNewPoint("cpu_load",
		models.NewTags(map[string]string{"host": "localhost", "region": "us-east", "zone": "1c"}),
		models.Fields{"value": float64(11)},
		time.Unix(1435077219, 0))

	pt, err := p.Parse("servers.localhost.cpu_load 11 1435077219")
	if err != nil {
		t.Fatalf("parse error: %v", err)
	}

	if exp.String() != pt.String() {
		t.Errorf("parse mismatch: got %v, exp %v", pt.String(), exp.String())
	}
}

func TestParseTemplateWhitespace(t *testing.T) {
	p, err := graphite.NewParser([]string{"servers.localhost .host.measurement* zone=1c"}, models.NewTags(map[string]string{
		"region": "us-east",
		"host":   "should not set",
	}))
	if err != nil {
		t.Fatalf("unexpected error creating parser, got %v", err)
	}

	exp := models.MustNewPoint("cpu_load",
		models.NewTags(map[string]string{"host": "localhost", "region": "us-east", "zone": "1c"}),
		models.Fields{"value": float64(11)},
		time.Unix(1435077219, 0))

	pt, err := p.Parse("servers.localhost.cpu_load 11 1435077219")
	if err != nil {
		t.Fatalf("parse error: %v", err)
	}

	if exp.String() != pt.String() {
		t.Errorf("parse mismatch: got %v, exp %v", pt.String(), exp.String())
	}
}

// Test basic functionality of ApplyTemplate
func TestApplyTemplate(t *testing.T) {
	o := graphite.Options{
		Separator: "_",
		Templates: []string{"current.* measurement.measurement"},
	}
	p, err := graphite.NewParserWithOptions(o)
	if err != nil {
		t.Fatalf("unexpected error creating parser, got %v", err)
	}

	measurement, _, _, _ := p.ApplyTemplate("current.users")
	if measurement != "current_users" {
		t.Errorf("Parser.ApplyTemplate unexpected result. got %s, exp %s",
			measurement, "current_users")
	}
}

// Test basic functionality of ApplyTemplate
func TestApplyTemplateNoMatch(t *testing.T) {
	o := graphite.Options{
		Separator: "_",
		Templates: []string{"foo.bar measurement.measurement"},
	}
	p, err := graphite.NewParserWithOptions(o)
	if err != nil {
		t.Fatalf("unexpected error creating parser, got %v", err)
	}

	measurement, _, _, _ := p.ApplyTemplate("current.users")
	if measurement != "current.users" {
		t.Errorf("Parser.ApplyTemplate unexpected result. got %s, exp %s",
			measurement, "current.users")
	}
}

// Test that most specific template is chosen
func TestApplyTemplateSpecific(t *testing.T) {
	o := graphite.Options{
		Separator: "_",
		Templates: []string{
			"current.* measurement.measurement",
			"current.*.* measurement.measurement.service",
		},
	}
	p, err := graphite.NewParserWithOptions(o)
	if err != nil {
		t.Fatalf("unexpected error creating parser, got %v", err)
	}

	measurement, tags, _, _ := p.ApplyTemplate("current.users.facebook")
	if measurement != "current_users" {
		t.Errorf("Parser.ApplyTemplate unexpected result. got %s, exp %s",
			measurement, "current_users")
	}
	service, ok := tags["service"]
	if !ok {
		t.Error("Expected for template to apply a 'service' tag, but not found")
	}
	if service != "facebook" {
		t.Errorf("Expected service='facebook' tag, got service='%s'", service)
	}
}

// Test that most specific template is N/A
func TestApplyTemplateSpecificIsNA(t *testing.T) {
	o := graphite.Options{
		Separator: "_",
		Templates: []string{
			"current.* measurement.service",
			"current.*.*.test measurement.measurement.service",
		},
	}
	p, err := graphite.NewParserWithOptions(o)
	if err != nil {
		t.Fatalf("unexpected error creating parser, got %v", err)
	}

	measurement, _, _, _ := p.ApplyTemplate("current.users.facebook")
	if measurement != "current" {
		t.Errorf("Parser.ApplyTemplate unexpected result. got %s, exp %s",
			measurement, "current")
	}
}

func TestApplyTemplateTags(t *testing.T) {
	o := graphite.Options{
		Separator: "_",
		Templates: []string{"current.* measurement.measurement region=us-west"},
	}
	p, err := graphite.NewParserWithOptions(o)
	if err != nil {
		t.Fatalf("unexpected error creating parser, got %v", err)
	}

	measurement, tags, _, _ := p.ApplyTemplate("current.users")
	if measurement != "current_users" {
		t.Errorf("Parser.ApplyTemplate unexpected result. got %s, exp %s",
			measurement, "current_users")
	}

	region, ok := tags["region"]
	if !ok {
		t.Error("Expected for template to apply a 'region' tag, but not found")
	}
	if region != "us-west" {
		t.Errorf("Expected region='us-west' tag, got region='%s'", region)
	}
}

func TestApplyTemplateField(t *testing.T) {
	o := graphite.Options{
		Separator: "_",
		Templates: []string{"current.* measurement.measurement.field"},
	}
	p, err := graphite.NewParserWithOptions(o)
	if err != nil {
		t.Fatalf("unexpected error creating parser, got %v", err)
	}

	measurement, _, field, err := p.ApplyTemplate("current.users.logged_in")
	if err != nil {
		t.Fatal(err)
	}

	if measurement != "current_users" {
		t.Errorf("Parser.ApplyTemplate unexpected result. got %s, exp %s",
			measurement, "current_users")
	}

	if field != "logged_in" {
		t.Errorf("Parser.ApplyTemplate unexpected result. got %s, exp %s",
			field, "logged_in")
	}
}

func TestApplyTemplateFieldError(t *testing.T) {
	o := graphite.Options{
		Separator: "_",
		Templates: []string{"current.* measurement.field.field"},
	}
	p, err := graphite.NewParserWithOptions(o)
	if err != nil {
		t.Fatalf("unexpected error creating parser, got %v", err)
	}

	_, _, _, err = p.ApplyTemplate("current.users.logged_in")
	if err == nil {
		t.Errorf("Parser.ApplyTemplate unexpected result. got %s, exp %s", err,
			"'field' can only be used once in each template: current.users.logged_in")
	}
}

// Test Helpers
func errstr(err error) string {
	if err != nil {
		return err.Error()
	}
	return ""
}
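
The tests above pin down the parser's behaviour: the most specific matching template wins, default tags are merged without overriding extracted ones, and `ApplyTemplate` rejects templates that use `field` more than once. As a minimal illustrative sketch (not part of the vendored sources), the same `graphite.NewParser`/`Parse` calls exercised by these tests could be driven from a standalone program like the one below; the template string and sample metric line are invented for the example:

```go
package main

import (
	"fmt"
	"log"

	"github.com/influxdata/influxdb/services/graphite"
)

func main() {
	// One illustrative template: metrics under servers.<host> keep the rest of
	// the dotted path as the measurement name, with <host> extracted as a tag.
	p, err := graphite.NewParser([]string{"servers.* .host.measurement*"}, nil)
	if err != nil {
		log.Fatalf("unexpected error creating parser: %v", err)
	}

	// Parse a made-up Graphite line: <metric> <value> <unix timestamp>.
	pt, err := p.Parse("servers.server01.cpu_load 11 1435077219")
	if err != nil {
		log.Fatalf("parse error: %v", err)
	}

	// Prints the resulting point in line-protocol form,
	// e.g. cpu_load,host=server01 value=11 <timestamp>.
	fmt.Println(pt.String())
}
```
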
474
vendor/github.com/influxdata/influxdb/services/graphite/service.go
generated
vendored
Normal file
474
vendor/github.com/influxdata/influxdb/services/graphite/service.go
generated
vendored
Normal file
@@ -0,0 +1,474 @@
// Package graphite provides a service for InfluxDB to ingest data via the graphite protocol.
package graphite // import "github.com/influxdata/influxdb/services/graphite"

import (
	"bufio"
	"fmt"
	"math"
	"net"
	"strings"
	"sync"
	"sync/atomic"
	"time"

	"github.com/influxdata/influxdb/models"
	"github.com/influxdata/influxdb/monitor/diagnostics"
	"github.com/influxdata/influxdb/services/meta"
	"github.com/influxdata/influxdb/tsdb"
	"github.com/uber-go/zap"
)

const udpBufferSize = 65536

// statistics gathered by the graphite package.
const (
	statPointsReceived      = "pointsRx"
	statBytesReceived       = "bytesRx"
	statPointsParseFail     = "pointsParseFail"
	statPointsNaNFail       = "pointsNaNFail"
	statBatchesTransmitted  = "batchesTx"
	statPointsTransmitted   = "pointsTx"
	statBatchesTransmitFail = "batchesTxFail"
	statConnectionsActive   = "connsActive"
	statConnectionsHandled  = "connsHandled"
)

type tcpConnection struct {
	conn        net.Conn
	connectTime time.Time
}

func (c *tcpConnection) Close() {
	c.conn.Close()
}

// Service represents a Graphite service.
type Service struct {
	bindAddress     string
	database        string
	retentionPolicy string
	protocol        string
	batchSize       int
	batchPending    int
	batchTimeout    time.Duration
	udpReadBuffer   int

	batcher *tsdb.PointBatcher
	parser  *Parser

	logger      zap.Logger
	stats       *Statistics
	defaultTags models.StatisticTags

	tcpConnectionsMu sync.Mutex
	tcpConnections   map[string]*tcpConnection
	diagsKey         string

	ln      net.Listener
	addr    net.Addr
	udpConn *net.UDPConn

	wg sync.WaitGroup

	mu    sync.RWMutex
	ready bool          // Has the required database been created?
	done  chan struct{} // Is the service closing or closed?

	Monitor interface {
		RegisterDiagnosticsClient(name string, client diagnostics.Client)
		DeregisterDiagnosticsClient(name string)
	}
	PointsWriter interface {
		WritePointsPrivileged(database, retentionPolicy string, consistencyLevel models.ConsistencyLevel, points []models.Point) error
	}
	MetaClient interface {
		CreateDatabaseWithRetentionPolicy(name string, spec *meta.RetentionPolicySpec) (*meta.DatabaseInfo, error)
		CreateRetentionPolicy(database string, spec *meta.RetentionPolicySpec, makeDefault bool) (*meta.RetentionPolicyInfo, error)
		Database(name string) *meta.DatabaseInfo
		RetentionPolicy(database, name string) (*meta.RetentionPolicyInfo, error)
	}
}

// NewService returns an instance of the Graphite service.
func NewService(c Config) (*Service, error) {
	// Use defaults where necessary.
	d := c.WithDefaults()

	s := Service{
		bindAddress:     d.BindAddress,
		database:        d.Database,
		retentionPolicy: d.RetentionPolicy,
		protocol:        d.Protocol,
		batchSize:       d.BatchSize,
		batchPending:    d.BatchPending,
		udpReadBuffer:   d.UDPReadBuffer,
		batchTimeout:    time.Duration(d.BatchTimeout),
		logger:          zap.New(zap.NullEncoder()),
		stats:           &Statistics{},
		defaultTags:     models.StatisticTags{"proto": d.Protocol, "bind": d.BindAddress},
		tcpConnections:  make(map[string]*tcpConnection),
		diagsKey:        strings.Join([]string{"graphite", d.Protocol, d.BindAddress}, ":"),
	}

	parser, err := NewParserWithOptions(Options{
		Templates:   d.Templates,
		DefaultTags: d.DefaultTags(),
		Separator:   d.Separator})

	if err != nil {
		return nil, err
	}
	s.parser = parser

	return &s, nil
}

// Open starts the Graphite input processing data.
func (s *Service) Open() error {
	s.mu.Lock()
	defer s.mu.Unlock()

	if !s.closed() {
		return nil // Already open.
	}
	s.done = make(chan struct{})

	s.logger.Info(fmt.Sprintf("Starting graphite service, batch size %d, batch timeout %s", s.batchSize, s.batchTimeout))

	// Register diagnostics if a Monitor service is available.
	if s.Monitor != nil {
		s.Monitor.RegisterDiagnosticsClient(s.diagsKey, s)
	}

	s.batcher = tsdb.NewPointBatcher(s.batchSize, s.batchPending, s.batchTimeout)
	s.batcher.Start()

	// Start processing batches.
	s.wg.Add(1)
	go s.processBatches(s.batcher)

	var err error
	if strings.ToLower(s.protocol) == "tcp" {
		s.addr, err = s.openTCPServer()
	} else if strings.ToLower(s.protocol) == "udp" {
		s.addr, err = s.openUDPServer()
	} else {
		return fmt.Errorf("unrecognized Graphite input protocol %s", s.protocol)
	}
	if err != nil {
		return err
	}

	s.logger.Info(fmt.Sprintf("Listening on %s: %s", strings.ToUpper(s.protocol), s.addr.String()))
	return nil
}

func (s *Service) closeAllConnections() {
	s.tcpConnectionsMu.Lock()
	defer s.tcpConnectionsMu.Unlock()
	for _, c := range s.tcpConnections {
		c.Close()
	}
}

// Close stops all data processing on the Graphite input.
func (s *Service) Close() error {
	s.mu.Lock()
	defer s.mu.Unlock()

	if s.closed() {
		return nil // Already closed.
	}
	close(s.done)

	s.closeAllConnections()

	if s.ln != nil {
		s.ln.Close()
	}
	if s.udpConn != nil {
		s.udpConn.Close()
	}

	if s.batcher != nil {
		s.batcher.Stop()
	}

	if s.Monitor != nil {
		s.Monitor.DeregisterDiagnosticsClient(s.diagsKey)
	}

	s.wg.Wait()
	s.done = nil

	return nil
}

// Closed returns true if the service is currently closed.
func (s *Service) Closed() bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.closed()
}

func (s *Service) closed() bool {
	select {
	case <-s.done:
		// Service is closing.
		return true
	default:
	}
	return s.done == nil
}

// createInternalStorage ensures that the required database has been created.
func (s *Service) createInternalStorage() error {
	s.mu.RLock()
	ready := s.ready
	s.mu.RUnlock()
	if ready {
		return nil
	}

	if db := s.MetaClient.Database(s.database); db != nil {
		if rp, _ := s.MetaClient.RetentionPolicy(s.database, s.retentionPolicy); rp == nil {
			spec := meta.RetentionPolicySpec{Name: s.retentionPolicy}
			if _, err := s.MetaClient.CreateRetentionPolicy(s.database, &spec, true); err != nil {
				return err
			}
		}
	} else {
		spec := meta.RetentionPolicySpec{Name: s.retentionPolicy}
		if _, err := s.MetaClient.CreateDatabaseWithRetentionPolicy(s.database, &spec); err != nil {
			return err
		}
	}

	// The service is now ready.
	s.mu.Lock()
	s.ready = true
	s.mu.Unlock()
	return nil
}

// WithLogger sets the logger on the service.
func (s *Service) WithLogger(log zap.Logger) {
	s.logger = log.With(
		zap.String("service", "graphite"),
		zap.String("addr", s.bindAddress),
	)
}

// Statistics maintains statistics for the graphite service.
type Statistics struct {
	PointsReceived      int64
	BytesReceived       int64
	PointsParseFail     int64
	PointsNaNFail       int64
	BatchesTransmitted  int64
	PointsTransmitted   int64
	BatchesTransmitFail int64
	ActiveConnections   int64
	HandledConnections  int64
}

// Statistics returns statistics for periodic monitoring.
func (s *Service) Statistics(tags map[string]string) []models.Statistic {
	return []models.Statistic{{
		Name: "graphite",
		Tags: s.defaultTags.Merge(tags),
		Values: map[string]interface{}{
			statPointsReceived:      atomic.LoadInt64(&s.stats.PointsReceived),
			statBytesReceived:       atomic.LoadInt64(&s.stats.BytesReceived),
			statPointsParseFail:     atomic.LoadInt64(&s.stats.PointsParseFail),
			statPointsNaNFail:       atomic.LoadInt64(&s.stats.PointsNaNFail),
			statBatchesTransmitted:  atomic.LoadInt64(&s.stats.BatchesTransmitted),
			statPointsTransmitted:   atomic.LoadInt64(&s.stats.PointsTransmitted),
			statBatchesTransmitFail: atomic.LoadInt64(&s.stats.BatchesTransmitFail),
			statConnectionsActive:   atomic.LoadInt64(&s.stats.ActiveConnections),
			statConnectionsHandled:  atomic.LoadInt64(&s.stats.HandledConnections),
		},
	}}
}

// Addr returns the address the Service binds to.
func (s *Service) Addr() net.Addr {
	return s.addr
}

// openTCPServer opens the Graphite input in TCP mode and starts processing data.
func (s *Service) openTCPServer() (net.Addr, error) {
	ln, err := net.Listen("tcp", s.bindAddress)
	if err != nil {
		return nil, err
	}
	s.ln = ln

	s.wg.Add(1)
	go func() {
		defer s.wg.Done()
		for {
			conn, err := s.ln.Accept()
			if opErr, ok := err.(*net.OpError); ok && !opErr.Temporary() {
				s.logger.Info("graphite TCP listener closed")
				return
			}
			if err != nil {
				s.logger.Info("error accepting TCP connection", zap.Error(err))
				continue
			}

			s.wg.Add(1)
			go s.handleTCPConnection(conn)
		}
	}()
	return ln.Addr(), nil
}

// handleTCPConnection services an individual TCP connection for the Graphite input.
func (s *Service) handleTCPConnection(conn net.Conn) {
	defer s.wg.Done()
	defer conn.Close()
	defer atomic.AddInt64(&s.stats.ActiveConnections, -1)
	defer s.untrackConnection(conn)
	atomic.AddInt64(&s.stats.ActiveConnections, 1)
	atomic.AddInt64(&s.stats.HandledConnections, 1)
	s.trackConnection(conn)

	reader := bufio.NewReader(conn)

	for {
		// Read up to the next newline.
		buf, err := reader.ReadBytes('\n')
		if err != nil {
			return
		}

		// Trim the buffer, even though there should be no padding
		line := strings.TrimSpace(string(buf))

		atomic.AddInt64(&s.stats.PointsReceived, 1)
		atomic.AddInt64(&s.stats.BytesReceived, int64(len(buf)))
		s.handleLine(line)
	}
}

func (s *Service) trackConnection(c net.Conn) {
	s.tcpConnectionsMu.Lock()
	defer s.tcpConnectionsMu.Unlock()
	s.tcpConnections[c.RemoteAddr().String()] = &tcpConnection{
		conn:        c,
		connectTime: time.Now().UTC(),
	}
}
func (s *Service) untrackConnection(c net.Conn) {
	s.tcpConnectionsMu.Lock()
	defer s.tcpConnectionsMu.Unlock()
	delete(s.tcpConnections, c.RemoteAddr().String())
}

// openUDPServer opens the Graphite input in UDP mode and starts processing incoming data.
func (s *Service) openUDPServer() (net.Addr, error) {
	addr, err := net.ResolveUDPAddr("udp", s.bindAddress)
	if err != nil {
		return nil, err
	}

	s.udpConn, err = net.ListenUDP("udp", addr)
	if err != nil {
		return nil, err
	}

	if s.udpReadBuffer != 0 {
		err = s.udpConn.SetReadBuffer(s.udpReadBuffer)
		if err != nil {
			return nil, fmt.Errorf("unable to set UDP read buffer to %d: %s",
				s.udpReadBuffer, err)
		}
	}

	buf := make([]byte, udpBufferSize)
	s.wg.Add(1)
	go func() {
		defer s.wg.Done()
		for {
			n, _, err := s.udpConn.ReadFromUDP(buf)
			if err != nil {
				s.udpConn.Close()
				return
			}

			lines := strings.Split(string(buf[:n]), "\n")
			for _, line := range lines {
				s.handleLine(line)
			}
			atomic.AddInt64(&s.stats.PointsReceived, int64(len(lines)))
			atomic.AddInt64(&s.stats.BytesReceived, int64(n))
		}
	}()
	return s.udpConn.LocalAddr(), nil
}

func (s *Service) handleLine(line string) {
	if line == "" {
		return
	}

	// Parse it.
	point, err := s.parser.Parse(line)
	if err != nil {
		switch err := err.(type) {
		case *UnsupportedValueError:
			// Graphite ignores NaN values with no error.
			if math.IsNaN(err.Value) {
				atomic.AddInt64(&s.stats.PointsNaNFail, 1)
				return
			}
		}
		s.logger.Info(fmt.Sprintf("unable to parse line: %s: %s", line, err))
		atomic.AddInt64(&s.stats.PointsParseFail, 1)
		return
	}

	s.batcher.In() <- point
}

// processBatches continually drains the given batcher and writes the batches to the database.
func (s *Service) processBatches(batcher *tsdb.PointBatcher) {
	defer s.wg.Done()
	for {
		select {
		case batch := <-batcher.Out():
			// Will attempt to create database if not yet created.
			if err := s.createInternalStorage(); err != nil {
				s.logger.Info(fmt.Sprintf("Required database or retention policy do not yet exist: %s", err.Error()))
				continue
			}

			if err := s.PointsWriter.WritePointsPrivileged(s.database, s.retentionPolicy, models.ConsistencyLevelAny, batch); err == nil {
				atomic.AddInt64(&s.stats.BatchesTransmitted, 1)
				atomic.AddInt64(&s.stats.PointsTransmitted, int64(len(batch)))
			} else {
				s.logger.Info(fmt.Sprintf("failed to write point batch to database %q: %s", s.database, err))
				atomic.AddInt64(&s.stats.BatchesTransmitFail, 1)
			}

		case <-s.done:
			return
		}
	}
}

// Diagnostics returns diagnostics of the graphite service.
func (s *Service) Diagnostics() (*diagnostics.Diagnostics, error) {
	s.tcpConnectionsMu.Lock()
	defer s.tcpConnectionsMu.Unlock()

	d := &diagnostics.Diagnostics{
		Columns: []string{"local", "remote", "connect time"},
		Rows:    make([][]interface{}, 0, len(s.tcpConnections)),
	}
	for _, v := range s.tcpConnections {
		d.Rows = append(d.Rows, []interface{}{v.conn.LocalAddr().String(), v.conn.RemoteAddr().String(), v.connectTime})
	}
	return d, nil
}
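
The service above wires the parser into a network input: `Open` starts a TCP or UDP listener plus a batching goroutine, `handleLine` parses each received line, and `processBatches` lazily creates the target database before writing each batch. The sketch below is an illustrative, non-authoritative example of standing the service up outside InfluxDB; it assumes only the exported identifiers visible in this file and in the tests that follow (`NewConfig`, `NewService`, `Open`, `Close`, `Addr`), and a real caller must also wire up `MetaClient` and `PointsWriter` implementations (as the test harness below does) before any points arrive:

```go
package main

import (
	"log"

	"github.com/influxdata/influxdb/services/graphite"
)

func main() {
	// Start from the package defaults and override a few fields, as the
	// service tests do. The address and database name are illustrative only.
	c := graphite.NewConfig()
	c.BindAddress = "127.0.0.1:2003"
	c.Database = "graphitedb"
	c.Protocol = "tcp"

	s, err := graphite.NewService(c)
	if err != nil {
		log.Fatal(err)
	}

	// NOTE: in a real deployment InfluxDB assigns s.MetaClient and
	// s.PointsWriter before opening; without them, incoming batches
	// cannot be written and createInternalStorage would fail.
	if err := s.Open(); err != nil {
		log.Fatal(err)
	}
	defer s.Close()

	log.Printf("graphite input listening on %s", s.Addr())
	// A long-running program would block here while traffic is served;
	// this sketch simply opens the listener and then shuts it down.
}
```
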
309
vendor/github.com/influxdata/influxdb/services/graphite/service_test.go
generated
vendored
Normal file
309
vendor/github.com/influxdata/influxdb/services/graphite/service_test.go
generated
vendored
Normal file
@@ -0,0 +1,309 @@
package graphite

import (
	"errors"
	"fmt"
	"net"
	"os"
	"sync"
	"testing"
	"time"

	"github.com/influxdata/influxdb/internal"
	"github.com/influxdata/influxdb/models"
	"github.com/influxdata/influxdb/services/meta"
	"github.com/influxdata/influxdb/toml"
	"github.com/uber-go/zap"
)

func Test_Service_OpenClose(t *testing.T) {
	// Let the OS assign a random port since we are only opening and closing the service,
	// not actually connecting to it.
	c := Config{BindAddress: "127.0.0.1:0"}
	service := NewTestService(&c)

	// Closing a closed service is fine.
	if err := service.Service.Close(); err != nil {
		t.Fatal(err)
	}

	// Closing a closed service again is fine.
	if err := service.Service.Close(); err != nil {
		t.Fatal(err)
	}

	if err := service.Service.Open(); err != nil {
		t.Fatal(err)
	}

	// Opening an already open service is fine.
	if err := service.Service.Open(); err != nil {
		t.Fatal(err)
	}

	// Reopening a previously opened service is fine.
	if err := service.Service.Close(); err != nil {
		t.Fatal(err)
	}

	if err := service.Service.Open(); err != nil {
		t.Fatal(err)
	}

	// Tidy up.
	if err := service.Service.Close(); err != nil {
		t.Fatal(err)
	}
}

func TestService_CreatesDatabase(t *testing.T) {
	t.Parallel()

	s := NewTestService(nil)
	s.WritePointsFn = func(string, string, models.ConsistencyLevel, []models.Point) error {
		return nil
	}

	called := make(chan struct{})
	s.MetaClient.CreateDatabaseWithRetentionPolicyFn = func(name string, _ *meta.RetentionPolicySpec) (*meta.DatabaseInfo, error) {
		if name != s.Service.database {
			t.Errorf("\n\texp = %s\n\tgot = %s\n", s.Service.database, name)
		}
		// Allow some time for the caller to return and the ready status to
		// be set.
		time.AfterFunc(10*time.Millisecond, func() { called <- struct{}{} })
		return nil, errors.New("an error")
	}

	if err := s.Service.Open(); err != nil {
		t.Fatal(err)
	}

	points, err := models.ParsePointsString(`cpu value=1`)
	if err != nil {
		t.Fatal(err)
	}

	s.Service.batcher.In() <- points[0] // Send a point.
	s.Service.batcher.Flush()
	select {
	case <-called:
		// OK
	case <-time.NewTimer(5 * time.Second).C:
		t.Fatal("Service should have attempted to create database")
	}

	// ready status should not have been switched due to meta client error.
	s.Service.mu.RLock()
	ready := s.Service.ready
	s.Service.mu.RUnlock()

	if got, exp := ready, false; got != exp {
		t.Fatalf("got %v, expected %v", got, exp)
	}

	// This time MC won't cause an error.
	s.MetaClient.CreateDatabaseWithRetentionPolicyFn = func(name string, _ *meta.RetentionPolicySpec) (*meta.DatabaseInfo, error) {
		// Allow some time for the caller to return and the ready status to
		// be set.
		time.AfterFunc(10*time.Millisecond, func() { called <- struct{}{} })
		return nil, nil
	}

	s.Service.batcher.In() <- points[0] // Send a point.
	s.Service.batcher.Flush()
	select {
	case <-called:
		// OK
	case <-time.NewTimer(5 * time.Second).C:
		t.Fatal("Service should have attempted to create database")
	}

	// ready status should now be true.
	s.Service.mu.RLock()
	ready = s.Service.ready
	s.Service.mu.RUnlock()

	if got, exp := ready, true; got != exp {
		t.Fatalf("got %v, expected %v", got, exp)
	}

	s.Service.Close()
}

func Test_Service_TCP(t *testing.T) {
	t.Parallel()

	now := time.Now().UTC().Round(time.Second)

	config := Config{}
	config.Database = "graphitedb"
	config.BatchSize = 0 // No batching.
	config.BatchTimeout = toml.Duration(time.Second)
	config.BindAddress = ":0"

	service := NewTestService(&config)

	// Allow test to wait until points are written.
	var wg sync.WaitGroup
	wg.Add(1)

	service.WritePointsFn = func(database, retentionPolicy string, consistencyLevel models.ConsistencyLevel, points []models.Point) error {
		defer wg.Done()

		pt, _ := models.NewPoint(
			"cpu",
			models.NewTags(map[string]string{}),
			map[string]interface{}{"value": 23.456},
			time.Unix(now.Unix(), 0))

		if database != "graphitedb" {
			t.Fatalf("unexpected database: %s", database)
		} else if retentionPolicy != "" {
			t.Fatalf("unexpected retention policy: %s", retentionPolicy)
		} else if len(points) != 1 {
			t.Fatalf("expected 1 point, got %d", len(points))
		} else if points[0].String() != pt.String() {
			t.Fatalf("expected point %v, got %v", pt.String(), points[0].String())
		}
		return nil
	}

	if err := service.Service.Open(); err != nil {
		t.Fatalf("failed to open Graphite service: %s", err.Error())
	}

	// Connect to the graphite endpoint we just spun up
	_, port, _ := net.SplitHostPort(service.Service.Addr().String())
	conn, err := net.Dial("tcp", "127.0.0.1:"+port)
	if err != nil {
		t.Fatal(err)
	}
	data := []byte(`cpu 23.456 `)
	data = append(data, []byte(fmt.Sprintf("%d", now.Unix()))...)
	data = append(data, '\n')
	data = append(data, []byte(`memory NaN `)...)
	data = append(data, []byte(fmt.Sprintf("%d", now.Unix()))...)
	data = append(data, '\n')
	_, err = conn.Write(data)
	conn.Close()
	if err != nil {
		t.Fatal(err)
	}

	wg.Wait()
}

func Test_Service_UDP(t *testing.T) {
	t.Parallel()

	now := time.Now().UTC().Round(time.Second)

	config := Config{}
	config.Database = "graphitedb"
	config.BatchSize = 0 // No batching.
	config.BatchTimeout = toml.Duration(time.Second)
	config.BindAddress = ":10000"
	config.Protocol = "udp"

	service := NewTestService(&config)

	// Allow test to wait until points are written.
	var wg sync.WaitGroup
	wg.Add(1)

	service.WritePointsFn = func(database, retentionPolicy string, consistencyLevel models.ConsistencyLevel, points []models.Point) error {
		defer wg.Done()

		pt, _ := models.NewPoint(
			"cpu",
			models.NewTags(map[string]string{}),
			map[string]interface{}{"value": 23.456},
			time.Unix(now.Unix(), 0))
		if database != "graphitedb" {
			t.Fatalf("unexpected database: %s", database)
		} else if retentionPolicy != "" {
			t.Fatalf("unexpected retention policy: %s", retentionPolicy)
		} else if points[0].String() != pt.String() {
			t.Fatalf("unexpected points: %#v", points[0].String())
		}
		return nil
	}

	if err := service.Service.Open(); err != nil {
		t.Fatalf("failed to open Graphite service: %s", err.Error())
	}

	// Connect to the graphite endpoint we just spun up
	_, port, _ := net.SplitHostPort(service.Service.Addr().String())
	conn, err := net.Dial("udp", "127.0.0.1:"+port)
	if err != nil {
		t.Fatal(err)
	}
	data := []byte(`cpu 23.456 `)
	data = append(data, []byte(fmt.Sprintf("%d", now.Unix()))...)
	data = append(data, '\n')
	_, err = conn.Write(data)
	if err != nil {
		t.Fatal(err)
	}

	wg.Wait()
	conn.Close()
}

type TestService struct {
	Service       *Service
	MetaClient    *internal.MetaClientMock
	WritePointsFn func(database, retentionPolicy string, consistencyLevel models.ConsistencyLevel, points []models.Point) error
}

func NewTestService(c *Config) *TestService {
	if c == nil {
		defaultC := NewConfig()
		c = &defaultC
	}

	gservice, err := NewService(*c)
	if err != nil {
		panic(err)
	}

	service := &TestService{
		Service:    gservice,
		MetaClient: &internal.MetaClientMock{},
	}

	service.MetaClient.CreateRetentionPolicyFn = func(string, *meta.RetentionPolicySpec, bool) (*meta.RetentionPolicyInfo, error) {
		return nil, nil
	}

	service.MetaClient.CreateDatabaseWithRetentionPolicyFn = func(string, *meta.RetentionPolicySpec) (*meta.DatabaseInfo, error) {
		return nil, nil
	}

	service.MetaClient.DatabaseFn = func(string) *meta.DatabaseInfo {
		return nil
	}

	service.MetaClient.RetentionPolicyFn = func(string, string) (*meta.RetentionPolicyInfo, error) {
		return nil, nil
	}

	if testing.Verbose() {
		service.Service.WithLogger(zap.New(
			zap.NewTextEncoder(),
			zap.Output(os.Stderr),
		))
	}

	// Set the Meta Client and PointsWriter.
	service.Service.MetaClient = service.MetaClient
	service.Service.PointsWriter = service

	return service
}

func (s *TestService) WritePointsPrivileged(database, retentionPolicy string, consistencyLevel models.ConsistencyLevel, points []models.Point) error {
	return s.WritePointsFn(database, retentionPolicy, consistencyLevel, points)
}