mirror of https://github.com/Oxalide/vsphere-influxdb-go.git synced 2023-10-10 11:36:51 +00:00

add vendoring with go dep

This commit is contained in:
Adrian Todorov
2017-10-25 20:52:40 +00:00
parent 704f4d20d1
commit a59409f16b
1627 changed files with 489673 additions and 0 deletions

@@ -0,0 +1,164 @@
# Influx Stress tool -> `v2`
The new stress tool was designed to:
* have higher potential write throughput than the previous version
* have more schema expressibility for testing different load profiles and professional services engagements
* have more granular reporting, so it is easier to draw conclusions from tests
In service of these requirements, we designed a language that looks a lot like `influxql` for giving the tool commands. Instead of a configuration file, the new stress test takes a list of these `Statements`.
The tool has the following components:
* Parser - parses the configuration file and turns it into an `[]Statement`. All code related to the parser is in `v2/stressql/`. The parser was designed as per @benbjohnson's great article on [parsers in go](https://blog.gopheracademy.com/advent-2014/parsers-lexers/).
* Statements - perform operations on target instance or change test environment. All code related to statements is in `v2/statement/`. The following are the available statements:
- `EXEC` - Still a TODO; planned to run external scripts from the config file.
- `GO` - Prepend to an `INSERT` or `QUERY` statement to run concurrently.
- `INFLUXQL` - All valid `influxql` will be passed directly to the targeted instance. Useful for setting up complex downsampling environments or just your testing environment.
- `INSERT` - Generates points following a template.
- `QUERY` - Runs a given query or generates sample queries given a companion `INSERT` statement.
- `SET` - Changes the test parameters. Defaults are listed in the `README.md`.
- `WAIT` - Required after a `GO` statement. Blocks until all preceding statements finish.
* Clients - The statement, results and InfluxDB clients. This code lives in `v2/stress_client`
- `StressTest` - The `Statement` client. Also contains the results client.
- `stressClient` - A performant InfluxDB client. Makes `GET /query` and `POST /write` requests. Forwards the results to the results client.
![Influx Stress Design](./influx_stress_v2.png)
### Statements
`Statement` is an interface defined in `v2/statement/statement.go`:
```go
type Statement interface {
Run(s *stressClient.StressTest)
Report(s *stressClient.StressTest) string
SetID(s string)
}
```
* `Run` prompts the statement to carry out its instructions. See the run functions of the various statements listed above for more information.
* `Report` retrieves and collates all recorded test data from the reporting InfluxDB instance.
* `SetID` gives the statement an ID; it is used by the parser. Each `statementID` is an 8-character random string used for reporting.
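A minimal implementation sketch of the interface above (the `EchoStatement` type and the stubbed `StressTest` here are hypothetical stand-ins for illustration; the real `StressTest` lives in `v2/stress_client`):

```go
package main

import "fmt"

// StressTest is a stub standing in for *stressClient.StressTest,
// which is not reproduced here.
type StressTest struct{}

// Statement mirrors the interface in v2/statement/statement.go.
type Statement interface {
	Run(s *StressTest)
	Report(s *StressTest) string
	SetID(s string)
}

// EchoStatement is a hypothetical minimal implementation.
type EchoStatement struct {
	StatementID string
}

// SetID stores the parser-assigned ID.
func (e *EchoStatement) SetID(s string) { e.StatementID = s }

// Run would carry out the statement's instructions; this stub does nothing.
func (e *EchoStatement) Run(s *StressTest) {}

// Report collates results; this stub just labels itself.
func (e *EchoStatement) Report(s *StressTest) string {
	return fmt.Sprintf("Echo statement %s: nothing to report", e.StatementID)
}

func main() {
	var stmt Statement = &EchoStatement{}
	stmt.SetID("1fpnsmta") // an 8-character random string, as the parser would assign
	s := &StressTest{}
	stmt.Run(s)
	fmt.Println(stmt.Report(s))
}
```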
### `Statement` -> `StressTest`
`Statement`s send `Package`s (queries or writes to the target database) or `Directives` (for changing test state) through the `StressTest` to the `stressClient` where they are processed.
```go
// v2/stress_client/package.go
// T is Query or Write
// StatementID is for reporting
type Package struct {
T Type
Body []byte
StatementID string
Tracer *Tracer
}
// v2/stress_client/directive.go
// Property is the test state variable to change
// Value is the new value
type Directive struct {
Property string
Value string
Tracer *Tracer
}
```
The `Tracer` on both of these packages contains a `sync.WaitGroup` that prevents `Statement`s from returning before all their operations are finished. This `WaitGroup` is incremented in the `Run()` of the statement and decremented in `*StressTest.resultsListen()` after results are recorded in the database. This is well documented with inline comments. `Tracer`s also carry optional tags for reporting purposes.
```go
// v2/stress_client/tracer.go
type Tracer struct {
Tags map[string]string
sync.WaitGroup
}
```
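The pattern can be sketched in isolation (the channel and goroutine below are illustrative stand-ins for the statement's `Run()` and `*StressTest.resultsListen()`):

```go
package main

import (
	"fmt"
	"sync"
)

// Tracer mirrors v2/stress_client/tracer.go: reporting tags plus an
// embedded WaitGroup.
type Tracer struct {
	Tags map[string]string
	sync.WaitGroup
}

func main() {
	tracer := &Tracer{Tags: map[string]string{"statement": "demo"}}
	results := make(chan string)

	// Stand-in for resultsListen(): record the result, then decrement.
	go func() {
		fmt.Println("recorded:", <-results)
		tracer.Done()
	}()

	// The statement's Run() increments once per operation it sends...
	tracer.Add(1)
	results <- "write response"
	// ...and blocks here until every operation has been recorded.
	tracer.Wait()
	fmt.Println("statement finished")
}
```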
### `StressTest`
The `StressTest` is the client for the statements; they interact with it through the `*StressTest.SendPackage()` and `*StressTest.SendDirective()` functions. It also contains some test state and the `ResultsClient`.
```go
type StressTest struct {
TestID string
TestName string
Precision string
StartDate string
BatchSize int
sync.WaitGroup
sync.Mutex
packageChan chan<- Package
directiveChan chan<- Directive
ResultsChan chan Response
communes map[string]*commune
ResultsClient influx.Client
}
```
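A sketch of the statement-facing send path (the buffered channel and the trimmed-down structs below are illustrative; the real `SendPackage` lives in `v2/stress_client`):

```go
package main

import "fmt"

// Type distinguishes queries from writes, as in v2/stress_client.
type Type int

const (
	Write Type = iota
	Query
)

// Package is trimmed down from v2/stress_client/package.go (Tracer omitted).
type Package struct {
	T           Type
	Body        []byte
	StatementID string
}

// StressTest keeps only the send side of the channel, as in the struct above.
type StressTest struct {
	packageChan chan<- Package
}

// SendPackage forwards a statement's package to the stressClient.
func (s *StressTest) SendPackage(p Package) {
	s.packageChan <- p
}

func main() {
	ch := make(chan Package, 1) // the stressClient's end of the channel
	s := &StressTest{packageChan: ch}
	s.SendPackage(Package{T: Write, Body: []byte("cpu value=1"), StatementID: "fooID8ch"})
	pkg := <-ch
	fmt.Printf("%s from %s\n", pkg.Body, pkg.StatementID)
}
```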
### Reporting Client
The `ResultsClient` turns raw responses from InfluxDB into properly tagged points containing any relevant information for storage in another InfluxDB instance. The code for creating those points lives in `v2/stress_client/reporting.go`.
### InfluxDB Instance (reporting)
This is `localhost:8086` by default. The results are currently stored in the `_stressTest` database.
### `stressClient`
An InfluxDB client designed for speed. `stressClient` also holds most test state.
```go
// v2/stress_client/stress_client.go
type stressClient struct {
testID string
// State for the Stress Test
addresses []string
precision string
startDate string
database string
wdelay string
qdelay string
// Channels from statements
packageChan <-chan Package
directiveChan <-chan Directive
// Response channel
responseChan chan<- Response
// Concurrency utilities
sync.WaitGroup
sync.Mutex
// Concurrency Limit for Writes and Reads
wconc int
qconc int
// Manage Read and Write concurrency separately
wc *ConcurrencyLimiter
rc *ConcurrencyLimiter
}
```
Code for handling the write path is in `v2/stress_client/stress_client_write.go` while the query path is in `v2/stress_client/stress_client_query.go`.
### InfluxDB Instance (stress test target)
The InfluxDB instance being put under stress.
### Response data
`Response`s carry points from `stressClient` to the `ResultsClient`.
```go
// v2/stress_client/response.go
type Response struct {
Point *influx.Point
Tracer *Tracer
}
```

@@ -0,0 +1,177 @@
# Influx Stress Tool V2
```
$ influx_stress -v2 -config iql/file.iql
```
This stress tool works from a list of InfluxQL-esque statements. The language has been extended to allow for some basic templating of fields, tags, and measurements in both line protocol and query statements.
By default the test outputs a human readable report to `STDOUT` and records test statistics in an active installation of InfluxDB at `localhost:8086`.
To set state variables for the test, such as the address of the Influx node, use the following syntax:
```
# The values listed below are the default values for each of the parameters
# Pipe-delimited list of addresses. For a cluster: [192.168.0.10:8086|192.168.0.2:8086|192.168.0.3:8086]
# Queries and writes are round-robined across the configured addresses.
SET Addresses [localhost:8086]
# False (default) uses http, true uses https
SET SSL [false]
# Username for targeted influx server or cluster
SET Username []
# Password for targeted influx server or cluster
SET Password []
# Database to target for queries and writes. Works like the InfluxCLI USE
SET Database [stress]
# Precision for the data being written
# Only s and ns supported
SET Precision [s]
# Date the first written point will be timestamped
SET StartDate [2016-01-01]
# Size of batches to send to InfluxDB
SET BatchSize [5000]
# Time to wait between sending batches
SET WriteInterval [0s]
# Time to wait between sending queries
SET QueryInterval [0s]
# Number of concurrent writers
SET WriteConcurrency [15]
# Number of concurrent readers
SET QueryConcurrency [5]
```
The values in the example are also the defaults.
Valid line protocol will be forwarded directly to the server, making it easy to set up your testing environment:
```
CREATE DATABASE thing
ALTER RETENTION POLICY default ON thing DURATION 1h REPLICATION 1
SET database [thing]
```
You can write points like this:
```
INSERT mockCpu
cpu,
host=server-[int inc(0) 10000],location=[string rand(8) 1000]
value=[float rand(1000) 0]
100000 10s
```

Explained:

```
# INSERT keyword kicks off the statement, next to it is the name of the statement for reporting and templated query generation
INSERT mockCpu
# Measurement
cpu,
# Tags - separated by commas. Tag values can be templates, mixed template and fixed values
host=server-[float rand(100) 10000],location=[int inc(0) 1000],fixed=[fix|fid|dor|pom|another_tag_value]
# Fields - separated by commas either templates, mixed template and fixed values
value=[float inc(0) 0]
# 'Timestamp' - Number of points to insert into this measurement and the amount of time between points
100000 10s
```
Each template contains 3 parts: a datatype (`str`, `float`, or `int`); a function which describes how the value changes between points, where `inc(0)` is increasing and `rand(n)` is a random number between `0` and `n`; and a last number giving the number of unique values in the tag or field, where `0` is unbounded.
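For example, the `inc(n)` function can be sketched as a closure over a counter, mirroring `incFloat` in `v2/statement/function.go`; successive calls fill successive points:

```go
package main

import "fmt"

// inc returns a closure that yields n, n+1, n+2, ... as strings,
// mirroring incFloat in v2/statement/function.go.
func inc(n int) func() string {
	i := n
	return func() string {
		s := fmt.Sprintf("%v", i)
		i++
		return s
	}
}

func main() {
	next := inc(0)
	// With the template value=[float inc(0) 0], successive points carry:
	for j := 0; j < 3; j++ {
		fmt.Printf("value=%s\n", next()) // value=0, value=1, value=2
	}
}
```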
To run multiple insert statements at once:
```
GO INSERT devices
devices,
city=[str rand(8) 10],country=[str rand(8) 25],device_id=[str rand(10) 1000]
lat=[float rand(90) 0],lng=[float rand(120) 0],temp=[float rand(40) 0]
10000000 10s
GO INSERT devices2
devices2,
city=[str rand(8) 10],country=[str rand(8) 25],device_id=[str rand(10) 1000]
lat=[float rand(90) 0],lng=[float rand(120) 0],temp=[float rand(40) 0]
10000000 10s
WAIT
```
The fastest point generation and write load requires running 3-4 `GO INSERT` statements at a time.
You can run queries like this:
```
QUERY cpu
SELECT mean(value) FROM cpu WHERE host='server-1'
DO 1000
```
### Output:
Output for the config file in this repo:
```
[√] "CREATE DATABASE thing" -> 1.806785ms
[√] "CREATE DATABASE thing2" -> 1.492504ms
SET Database = 'thing'
SET Precision = 's'
Go Write Statement: mockCpu
Points/Sec: 245997
Resp Time Average: 173.354445ms
Resp Time Standard Deviation: 123.80344ms
95th Percentile Write Response: 381.363503ms
Average Request Bytes: 276110
Successful Write Reqs: 20
Retries: 0
Go Query Statement: mockCpu
Resp Time Average: 3.140803ms
Resp Time Standard Deviation: 2.292328ms
95th Percentile Read Response: 5.915437ms
Query Resp Bytes Average: 16 bytes
Successful Queries: 10
WAIT -> 406.400059ms
SET DATABASE = 'thing2'
Go Write Statement: devices
Points/Sec: 163348
Resp Time Average: 132.553789ms
Resp Time Standard Deviation: 149.397972ms
95th Percentile Write Response: 567.987467ms
Average Request Bytes: 459999
Successful Write Reqs: 20
Retries: 0
Go Write Statement: devices2
Points/Sec: 160078
Resp Time Average: 133.303097ms
Resp Time Standard Deviation: 144.352404ms
95th Percentile Write Response: 560.565066ms
Average Request Bytes: 464999
Successful Write Reqs: 20
Retries: 0
Go Query Statement: fooName
Resp Time Average: 1.3307ms
Resp Time Standard Deviation: 640.249µs
95th Percentile Read Response: 2.668ms
Query Resp Bytes Average: 16 bytes
Successful Queries: 10
WAIT -> 624.585319ms
[√] "DROP DATABASE thing" -> 991.088464ms
[√] "DROP DATABASE thing2" -> 421.362831ms
```
### Next Steps:
##### Documentation
- Parser behavior and proper `.iql` syntax
- How the templated query generation works
- Collection of tested `.iql` files to simulate different loads
##### Performance
- `Commune`, a struct that enables templated query generation, blocks writes when used; investigate its performance.
- Templated query generation is currently in a quasi-working state. See the above point.

Binary file not shown (91 KiB)

@@ -0,0 +1,13 @@
CREATE DATABASE stress
GO INSERT cpu
cpu,
host=server-[int inc(0) 100000],location=us-west
value=[int rand(100) 0]
10000000 10s
GO QUERY cpu
SELECT count(value) FROM cpu WHERE %t
DO 250
WAIT

@@ -0,0 +1,45 @@
CREATE DATABASE thing
CREATE DATABASE thing2
SET Database [thing]
SET Precision [s]
GO INSERT mockCpu
cpu,
host=server-[float inc(0) 10000],loc=[us-west|us-east|eu-north]
value=[int inc(100) 0]
100000 10s
GO QUERY mockCpu
SELECT mean(value) FROM cpu WHERE host='server-1'
DO 10
WAIT
SET DATABASE [thing2]
GO INSERT devices
devices,
city=[str rand(8) 100],country=[str rand(8) 25],device_id=[str rand(10) 100]
lat=[float rand(90) 0],lng=[float rand(120) 0],temp=[float rand(40) 0]
100000 10s
GO INSERT devices2
devices2,
city=[str rand(8) 100],country=[str rand(8) 25],device_id=[str rand(10) 100]
lat=[float rand(90) 0],lng=[float rand(120) 0],temp=[float rand(40) 0]
100000 10s
GO QUERY fooName
SELECT count(temp) FROM devices WHERE temp > 30
DO 10
WAIT
DROP DATABASE thing
DROP DATABASE thing2
WAIT

@@ -0,0 +1,59 @@
package stress
import (
"fmt"
"log"
"time"
influx "github.com/influxdata/influxdb/client/v2"
"github.com/influxdata/influxdb/stress/v2/stress_client"
"github.com/influxdata/influxdb/stress/v2/stressql"
)
// RunStress takes a configFile and kicks off the stress test
func RunStress(file string) {
// Spin up the Client
s := stressClient.NewStressTest()
// Parse the file into Statements
stmts, err := stressql.ParseStatements(file)
// Log parse errors and quit if found
if err != nil {
log.Fatalf("Parsing Error\n error: %v\n", err)
}
// Run all statements
for _, stmt := range stmts {
stmt.Run(s)
}
// Clear out the batch of unsent response points
resp := blankResponse()
s.ResultsChan <- resp
resp.Tracer.Wait()
// Compile all Reports
for _, stmt := range stmts {
fmt.Println(stmt.Report(s))
}
}
func blankResponse() stressClient.Response {
// Points must have at least one field
fields := map[string]interface{}{"done": true}
// Make a 'blank' point
p, err := influx.NewPoint("done", make(map[string]string), fields, time.Now())
// Panic on error
if err != nil {
log.Fatalf("Error creating blank response point\n error: %v\n", err)
}
// Add a tracer to prevent program from returning too early
tracer := stressClient.NewTracer(make(map[string]string))
// Add to the WaitGroup
tracer.Add(1)
// Make a new response with the point and the tracer
resp := stressClient.NewResponse(p, tracer)
return resp
}

@@ -0,0 +1,32 @@
package statement
import (
"time"
"github.com/influxdata/influxdb/stress/v2/stress_client"
)
// ExecStatement runs external scripts. This functionality is not built out
// TODO: Wire up!
type ExecStatement struct {
StatementID string
Script string
runtime time.Duration
}
// SetID satisfies the Statement Interface
func (i *ExecStatement) SetID(s string) {
i.StatementID = s
}
// Run satisfies the Statement Interface
func (i *ExecStatement) Run(s *stressClient.StressTest) {
runtime := time.Now()
i.runtime = time.Since(runtime)
}
// Report satisfies the Statement Interface
func (i *ExecStatement) Report(s *stressClient.StressTest) string {
return ""
}

@@ -0,0 +1,41 @@
package statement
import (
"testing"
"github.com/influxdata/influxdb/stress/v2/stress_client"
)
func TestExecSetID(t *testing.T) {
e := newTestExec()
newID := "oaijnifo"
e.SetID(newID)
if e.StatementID != newID {
t.Errorf("Expected: %v\nGot: %v\n", newID, e.StatementID)
}
}
func TestExecRun(t *testing.T) {
e := newTestExec()
s, _, _ := stressClient.NewTestStressTest()
e.Run(s)
if e == nil {
t.Fail()
}
}
func TestExecReport(t *testing.T) {
e := newTestExec()
s, _, _ := stressClient.NewTestStressTest()
rep := e.Report(s)
if rep != "" {
t.Fail()
}
}
func newTestExec() *ExecStatement {
return &ExecStatement{
StatementID: "fooID",
Script: "fooscript.txt",
}
}

@@ -0,0 +1,176 @@
package statement
import (
crypto "crypto/rand"
"fmt"
"math/rand"
)
// ################
// # Function #
// ################
// Function is a struct that holds information for generating values in templated points
type Function struct {
Type string
Fn string
Argument int
Count int
}
// NewStringer creates a new Stringer
func (f *Function) NewStringer(series int) Stringer {
var fn Stringer
switch f.Type {
case "int":
fn = NewIntFunc(f.Fn, f.Argument)
case "float":
fn = NewFloatFunc(f.Fn, f.Argument)
case "str":
fn = NewStrFunc(f.Fn, f.Argument)
default:
fn = func() string { return "STRINGER ERROR" }
}
if int(f.Count) != 0 {
return cycle(f.Count, fn)
}
return nTimes(series, fn)
}
// ################
// # Stringers #
// ################
// Stringers is a collection of Stringer
type Stringers []Stringer
// Eval returns an array of all the Stringer functions evaluated once
func (s Stringers) Eval(time func() int64) []interface{} {
arr := make([]interface{}, len(s)+1)
for i, st := range s {
arr[i] = st()
}
arr[len(s)] = time()
return arr
}
// Stringer is a function that returns a string
type Stringer func() string
func randStr(n int) func() string {
return func() string {
b := make([]byte, n/2)
_, _ = crypto.Read(b)
return fmt.Sprintf("%x", b)
}
}
// NewStrFunc creates a new Stringer to create strings for templated writes
func NewStrFunc(fn string, arg int) Stringer {
switch fn {
case "rand":
return randStr(arg)
default:
return func() string { return "STR ERROR" }
}
}
func randFloat(n int) func() string {
return func() string {
return fmt.Sprintf("%v", rand.Intn(n))
}
}
func incFloat(n int) func() string {
i := n
return func() string {
s := fmt.Sprintf("%v", i)
i++
return s
}
}
// NewFloatFunc creates a new Stringer to create float values for templated writes
func NewFloatFunc(fn string, arg int) Stringer {
switch fn {
case "rand":
return randFloat(arg)
case "inc":
return incFloat(arg)
default:
return func() string { return "FLOAT ERROR" }
}
}
func randInt(n int) Stringer {
return func() string {
return fmt.Sprintf("%vi", rand.Intn(n))
}
}
func incInt(n int) Stringer {
i := n
return func() string {
s := fmt.Sprintf("%vi", i)
i++
return s
}
}
// NewIntFunc creates a new Stringer to create int values for templated writes
func NewIntFunc(fn string, arg int) Stringer {
switch fn {
case "rand":
return randInt(arg)
case "inc":
return incInt(arg)
default:
return func() string { return "INT ERROR" }
}
}
// nTimes will return the previous return value of a function
// n-many times before calling the function again
func nTimes(n int, fn Stringer) Stringer {
i := 0
t := fn()
return func() string {
i++
if i > n {
t = fn()
i = 1
}
return t
}
}
// cycle will cycle through a list of values before repeating them
func cycle(n int, fn Stringer) Stringer {
if n == 0 {
return fn
}
i := 0
cache := make([]string, n)
t := fn()
cache[i] = t
return func() string {
i++
if i < n {
cache[i] = fn()
}
t = cache[(i-1)%n]
return t
}
}

@@ -0,0 +1,143 @@
package statement
import (
"testing"
)
func TestNewStrRandStringer(t *testing.T) {
function := newStrRandFunction()
strRandStringer := function.NewStringer(10)
s := strRandStringer()
if len(s) != function.Argument {
t.Errorf("Expected: %v\nGot: %v\n", function.Argument, len(s))
}
}
func TestNewIntIncStringer(t *testing.T) {
function := newIntIncFunction()
intIncStringer := function.NewStringer(10)
s := intIncStringer()
if s != "0i" {
t.Errorf("Expected: 0i\nGot: %v\n", s)
}
}
func TestNewIntRandStringer(t *testing.T) {
function := newIntRandFunction()
intRandStringer := function.NewStringer(10)
s := intRandStringer()
if parseInt(s[:len(s)-1]) > function.Argument {
t.Errorf("Expected value below: %v\nGot value: %v\n", function.Argument, s)
}
}
func TestNewFloatIncStringer(t *testing.T) {
function := newFloatIncFunction()
floatIncStringer := function.NewStringer(10)
s := floatIncStringer()
if parseFloat(s) != function.Argument {
t.Errorf("Expected value: %v\nGot: %v\n", function.Argument, s)
}
}
func TestNewFloatRandStringer(t *testing.T) {
function := newFloatRandFunction()
floatRandStringer := function.NewStringer(10)
s := floatRandStringer()
if parseFloat(s) > function.Argument {
t.Errorf("Expected value below: %v\nGot value: %v\n", function.Argument, s)
}
}
func TestStringersEval(t *testing.T) {
// Make the *Function(s)
strRandFunction := newStrRandFunction()
intIncFunction := newIntIncFunction()
intRandFunction := newIntRandFunction()
floatIncFunction := newFloatIncFunction()
floatRandFunction := newFloatRandFunction()
// Make the *Stringer(s)
strRandStringer := strRandFunction.NewStringer(10)
intIncStringer := intIncFunction.NewStringer(10)
intRandStringer := intRandFunction.NewStringer(10)
floatIncStringer := floatIncFunction.NewStringer(10)
floatRandStringer := floatRandFunction.NewStringer(10)
// Make the *Stringers
stringers := Stringers([]Stringer{strRandStringer, intIncStringer, intRandStringer, floatIncStringer, floatRandStringer})
// Spoof the time function
// Call *Stringers.Eval
values := stringers.Eval(spoofTime)
// Check the strRandFunction
if len(values[0].(string)) != strRandFunction.Argument {
t.Errorf("Expected: %v\nGot: %v\n", strRandFunction.Argument, len(values[0].(string)))
}
// Check the intIncFunction
if values[1].(string) != "0i" {
t.Errorf("Expected: 0i\nGot: %v\n", values[1].(string))
}
// Check the intRandFunction
s := values[2].(string)
if parseInt(s[:len(s)-1]) > intRandFunction.Argument {
t.Errorf("Expected value below: %v\nGot value: %v\n", intRandFunction.Argument, s)
}
// Check the floatIncFunction
if parseFloat(values[3].(string)) != floatIncFunction.Argument {
t.Errorf("Expected value: %v\nGot: %v\n", floatIncFunction.Argument, values[3])
}
// Check the floatRandFunction
if parseFloat(values[4].(string)) > floatRandFunction.Argument {
t.Errorf("Expected value below: %v\nGot value: %v\n", floatRandFunction.Argument, values[4])
}
// Check the spoofTime func
if values[5].(int64) != spoofTime() {
	t.Errorf("Expected: %v\nGot: %v\n", spoofTime(), values[5])
}
}
func spoofTime() int64 {
return int64(8)
}
func newStrRandFunction() *Function {
return &Function{
Type: "str",
Fn: "rand",
Argument: 8,
Count: 1000,
}
}
func newIntIncFunction() *Function {
return &Function{
Type: "int",
Fn: "inc",
Argument: 0,
Count: 0,
}
}
func newIntRandFunction() *Function {
return &Function{
Type: "int",
Fn: "rand",
Argument: 100,
Count: 1000,
}
}
func newFloatIncFunction() *Function {
return &Function{
Type: "float",
Fn: "inc",
Argument: 0,
Count: 1000,
}
}
func newFloatRandFunction() *Function {
return &Function{
Type: "float",
Fn: "rand",
Argument: 100,
Count: 1000,
}
}

@@ -0,0 +1,40 @@
package statement
import (
"fmt"
"time"
"github.com/influxdata/influxdb/stress/v2/stress_client"
)
// GoStatement is a Statement Implementation to allow other statements to be run concurrently
type GoStatement struct {
Statement
StatementID string
}
// SetID satisfies the Statement Interface
func (i *GoStatement) SetID(s string) {
i.StatementID = s
}
// Run satisfies the Statement Interface
func (i *GoStatement) Run(s *stressClient.StressTest) {
// TODO: remove
switch i.Statement.(type) {
case *QueryStatement:
time.Sleep(1 * time.Second)
}
s.Add(1)
go func() {
i.Statement.Run(s)
s.Done()
}()
}
// Report satisfies the Statement Interface
func (i *GoStatement) Report(s *stressClient.StressTest) string {
return fmt.Sprintf("Go %v", i.Statement.Report(s))
}

@@ -0,0 +1,41 @@
package statement
import (
"testing"
"github.com/influxdata/influxdb/stress/v2/stress_client"
)
func TestGoSetID(t *testing.T) {
e := newTestGo()
newID := "oaijnifo"
e.SetID(newID)
if e.StatementID != newID {
t.Errorf("Expected: %v\nGot: %v\n", newID, e.StatementID)
}
}
func TestGoRun(t *testing.T) {
e := newTestGo()
s, _, _ := stressClient.NewTestStressTest()
e.Run(s)
if e == nil {
t.Fail()
}
}
func TestGoReport(t *testing.T) {
e := newTestGo()
s, _, _ := stressClient.NewTestStressTest()
report := e.Report(s)
if report != "Go " {
t.Errorf("Expected: %v\nGot: %v\n", "Go ", report)
}
}
func newTestGo() *GoStatement {
return &GoStatement{
Statement: newTestExec(),
StatementID: "fooID",
}
}

@@ -0,0 +1,69 @@
package statement
import (
"log"
"time"
"github.com/influxdata/influxdb/stress/v2/stress_client"
)
// InfluxqlStatement is a Statement Implementation that allows statements that parse in InfluxQL to be passed directly to the target instance
type InfluxqlStatement struct {
StatementID string
Query string
Tracer *stressClient.Tracer
}
func (i *InfluxqlStatement) tags() map[string]string {
tags := make(map[string]string)
return tags
}
// SetID satisfies the Statement Interface
func (i *InfluxqlStatement) SetID(s string) {
i.StatementID = s
}
// Run satisfies the Statement Interface
func (i *InfluxqlStatement) Run(s *stressClient.StressTest) {
// Set the tracer
i.Tracer = stressClient.NewTracer(i.tags())
// Make the Package
p := stressClient.NewPackage(stressClient.Query, []byte(i.Query), i.StatementID, i.Tracer)
// Increment the tracer
i.Tracer.Add(1)
// Send the Package
s.SendPackage(p)
// Wait for all operations to finish
i.Tracer.Wait()
}
// Report satisfies the Statement Interface
// No test coverage, fix
func (i *InfluxqlStatement) Report(s *stressClient.StressTest) (out string) {
allData := s.GetStatementResults(i.StatementID, "query")
iqlr := &influxQlReport{
statement: i.Query,
columns: allData[0].Series[0].Columns,
values: allData[0].Series[0].Values,
}
iqlr.responseTime = time.Duration(responseTimes(iqlr.columns, iqlr.values)[0].Value)
switch countSuccesses(iqlr.columns, iqlr.values) {
case 0:
iqlr.success = false
case 1:
iqlr.success = true
default:
log.Fatal("Error fetching response for InfluxQL statement")
}
return iqlr.String()
}

@@ -0,0 +1,44 @@
package statement
import (
"testing"
"github.com/influxdata/influxdb/stress/v2/stress_client"
)
func TestInfluxQlSetID(t *testing.T) {
e := newTestInfluxQl()
newID := "oaijnifo"
e.SetID(newID)
if e.StatementID != newID {
t.Errorf("Expected: %v\nGot: %v\n", newID, e.StatementID)
}
}
func TestInfluxQlRun(t *testing.T) {
e := newTestInfluxQl()
s, packageCh, _ := stressClient.NewTestStressTest()
go func() {
for pkg := range packageCh {
if pkg.T != stressClient.Query {
t.Errorf("Expected package to be Query\nGot: %v", pkg.T)
}
if string(pkg.Body) != e.Query {
t.Errorf("Expected query: %v\nGot: %v", e.Query, string(pkg.Body))
}
if pkg.StatementID != e.StatementID {
t.Errorf("Expected statementID: %v\nGot: %v", e.StatementID, pkg.StatementID)
}
pkg.Tracer.Done()
}
}()
e.Run(s)
}
func newTestInfluxQl() *InfluxqlStatement {
return &InfluxqlStatement{
Query: "CREATE DATABASE foo",
Tracer: stressClient.NewTracer(make(map[string]string)),
StatementID: "fooID",
}
}

@@ -0,0 +1,214 @@
package statement
import (
"bytes"
"fmt"
"log"
"strconv"
"strings"
"sync"
"time"
"github.com/influxdata/influxdb/stress/v2/stress_client"
)
// InsertStatement is a Statement Implementation that creates points to be written to the target InfluxDB instance
type InsertStatement struct {
TestID string
StatementID string
// Statement Name
Name string
// Template string for points. Filled by the output of stringers
TemplateString string
// TagCount is used to find the number of series in the dataset
TagCount int
// The Tracer prevents InsertStatement.Run() from returning early
Tracer *stressClient.Tracer
// Timestamp is the number of points to write and the precision
Timestamp *Timestamp
// Templates turn into stringers
Templates Templates
stringers Stringers
// Number of series in this insert Statement
series int
// Returns the proper time for the next point
time func() int64
// Concurrency utilities
sync.WaitGroup
sync.Mutex
// Timer for runtime and pps calculation
runtime time.Duration
}
func (i *InsertStatement) tags() map[string]string {
tags := map[string]string{
"number_fields": i.numFields(),
"number_series": fmtInt(i.series),
"number_points_write": fmtInt(i.Timestamp.Count),
}
return tags
}
// SetID satisfies the Statement Interface
func (i *InsertStatement) SetID(s string) {
i.StatementID = s
}
// SetVars sets up the environment for InsertStatement to call its Run function
func (i *InsertStatement) SetVars(s *stressClient.StressTest) chan<- string {
// Set the #series at 1 to start
i.series = 1
// Num series is the product of the cardinality of the tags
for _, tmpl := range i.Templates[0:i.TagCount] {
i.series *= tmpl.numSeries()
}
// make stringers from the templates
i.stringers = i.Templates.Init(i.series)
// Set the time function, keeps track of 'time' of the points being created
i.time = i.Timestamp.Time(s.StartDate, i.series, s.Precision)
// Set a commune on the StressTest
s.Lock()
comCh := s.SetCommune(i.Name)
s.Unlock()
// Set the tracer
i.Tracer = stressClient.NewTracer(i.tags())
return comCh
}
// Run satisfies the Statement Interface
func (i *InsertStatement) Run(s *stressClient.StressTest) {
// Set variables on the InsertStatement and make the comCh
comCh := i.SetVars(s)
// TODO: Refactor to eliminate the ctr
// Start the counter
ctr := 0
// Create the first bytes buffer
buf := bytes.NewBuffer([]byte{})
runtime := time.Now()
for k := 0; k < i.Timestamp.Count; k++ {
// Increment the counter. ctr == k + 1?
ctr++
// Make the point from the template string and the stringers
point := fmt.Sprintf(i.TemplateString, i.stringers.Eval(i.time)...)
// Add the string to the buffer
buf.WriteString(point)
// Add a newline char to separate the points
buf.WriteString("\n")
// If len(batch) == batchSize then send it
if ctr%s.BatchSize == 0 && ctr != 0 {
b := buf.Bytes()
// Trimming the trailing newline character
b = b[0 : len(b)-1]
// Create the package
p := stressClient.NewPackage(stressClient.Write, b, i.StatementID, i.Tracer)
// Use Tracer to wait for all operations to finish
i.Tracer.Add(1)
// Send the package
s.SendPackage(p)
// Reset the bytes Buffer
temp := bytes.NewBuffer([]byte{})
buf = temp
}
// TODO: Racy
// Has to do with InsertStatement and QueryStatement communication
if len(comCh) < cap(comCh) {
select {
case comCh <- point:
break
default:
break
}
}
}
// If there are additional points remaining in the buffer, send them before exiting
if buf.Len() != 0 {
b := buf.Bytes()
// Trimming the trailing newline character
b = b[0 : len(b)-1]
// Create the package
p := stressClient.NewPackage(stressClient.Write, b, i.StatementID, i.Tracer)
// Use Tracer to wait for all operations to finish
i.Tracer.Add(1)
// Send the package
s.SendPackage(p)
}
// Wait for all tracers to decrement
i.Tracer.Wait()
// Stop the timer
i.runtime = time.Since(runtime)
}
// Report satisfies the Statement Interface
func (i *InsertStatement) Report(s *stressClient.StressTest) string {
// Pull data via StressTest client
allData := s.GetStatementResults(i.StatementID, "write")
if allData == nil || allData[0].Series == nil {
log.Fatalf("No data returned for write report\n Statement Name: %v\n Statement ID: %v\n", i.Name, i.StatementID)
}
ir := &insertReport{
name: i.Name,
columns: allData[0].Series[0].Columns,
values: allData[0].Series[0].Values,
}
responseTimes := responseTimes(ir.columns, ir.values)
ir.percentile = percentile(responseTimes)
ir.avgResponseTime = avgDuration(responseTimes)
ir.stdDevResponseTime = stddevDuration(responseTimes)
ir.pointsPerSecond = int(float64(i.Timestamp.Count) / i.runtime.Seconds())
ir.numRetries = countRetries(ir.columns, ir.values)
ir.successfulWrites = countSuccesses(ir.columns, ir.values)
ir.avgRequestBytes = numberBytes(ir.columns, ir.values)
return ir.String()
}
func (i *InsertStatement) numFields() string {
pt := strings.Split(i.TemplateString, " ")
fields := strings.Split(pt[1], ",")
return fmtInt(len(fields))
}
func fmtInt(i int) string {
return strconv.FormatInt(int64(i), 10)
}

@@ -0,0 +1,50 @@
package statement
import (
"strings"
"testing"
"github.com/influxdata/influxdb/stress/v2/stress_client"
)
func TestInsertSetID(t *testing.T) {
e := newTestInsert()
newID := "oaijnifo"
e.SetID(newID)
if e.StatementID != newID {
t.Errorf("Expected: %v\nGot: %v\n", newID, e.StatementID)
}
}
func TestInsertRun(t *testing.T) {
i := newTestInsert()
s, packageCh, _ := stressClient.NewTestStressTest()
// Listen to the other side of the directiveCh
go func() {
for pkg := range packageCh {
countPoints := i.Timestamp.Count
batchSize := s.BatchSize
got := len(strings.Split(string(pkg.Body), "\n"))
switch got {
case countPoints % batchSize:
case batchSize:
default:
t.Errorf("countPoints: %v\nbatchSize: %v\ngot: %v\n", countPoints, batchSize, got)
}
pkg.Tracer.Done()
}
}()
i.Run(s)
}
func newTestInsert() *InsertStatement {
return &InsertStatement{
TestID: "foo_test",
StatementID: "foo_ID",
Name: "foo_name",
TemplateString: "cpu,%v %v %v",
Timestamp: newTestTimestamp(),
Templates: newTestTemplates(),
TagCount: 1,
}
}

View File

@@ -0,0 +1,161 @@
package statement
import (
"fmt"
"log"
"time"
"github.com/influxdata/influxdb/models"
"github.com/influxdata/influxdb/stress/v2/stress_client"
)
// QueryStatement is a Statement Implementation to run queries on the target InfluxDB instance
type QueryStatement struct {
StatementID string
Name string
// TemplateString is a query template that can be filled in by Args
TemplateString string
Args []string
// Number of queries to run
Count int
// Tracer for tracking returns
Tracer *stressClient.Tracer
// track time for all queries
runtime time.Duration
}
// This function adds tags to the recording points
func (i *QueryStatement) tags() map[string]string {
tags := make(map[string]string)
return tags
}
// SetID satisfies the Statement Interface
func (i *QueryStatement) SetID(s string) {
i.StatementID = s
}
// Run satisfies the Statement Interface
func (i *QueryStatement) Run(s *stressClient.StressTest) {
i.Tracer = stressClient.NewTracer(i.tags())
vals := make(map[string]interface{})
var point models.Point
runtime := time.Now()
for j := 0; j < i.Count; j++ {
// If the query is a simple query, send it.
if len(i.Args) == 0 {
b := []byte(i.TemplateString)
// Make the package
p := stressClient.NewPackage(stressClient.Query, b, i.StatementID, i.Tracer)
// Increment the tracer
i.Tracer.Add(1)
// Send the package
s.SendPackage(p)
} else {
// Otherwise, cherry-pick field values from the commune
// TODO: Currently the program locks up here if s.GetPoint
// cannot return a value, which can happen.
// See insert.go
s.Lock()
point = s.GetPoint(i.Name, s.Precision)
s.Unlock()
setMapValues(vals, point)
// Set the template string with args from the commune
b := []byte(fmt.Sprintf(i.TemplateString, setArgs(vals, i.Args)...))
// Make the package
p := stressClient.NewPackage(stressClient.Query, b, i.StatementID, i.Tracer)
// Increment the tracer
i.Tracer.Add(1)
// Send the package
s.SendPackage(p)
}
}
// Wait for all operations to finish
i.Tracer.Wait()
// Stop the timer
i.runtime = time.Since(runtime)
}
// Report satisfies the Statement Interface
func (i *QueryStatement) Report(s *stressClient.StressTest) string {
// Pull data via StressTest client
allData := s.GetStatementResults(i.StatementID, "query")
if len(allData) == 0 || allData[0].Series == nil {
log.Fatalf("No data returned for query report\n Statement Name: %v\n Statement ID: %v\n", i.Name, i.StatementID)
}
qr := &queryReport{
name: i.Name,
columns: allData[0].Series[0].Columns,
values: allData[0].Series[0].Values,
}
responseTimes := responseTimes(qr.columns, qr.values)
qr.percentile = percentile(responseTimes)
qr.avgResponseTime = avgDuration(responseTimes)
qr.stdDevResponseTime = stddevDuration(responseTimes)
qr.successfulReads = countSuccesses(qr.columns, qr.values)
qr.responseBytes = numberBytes(qr.columns, qr.values)
return qr.String()
}
func getRandomTagPair(m models.Tags) string {
for k, v := range m {
return fmt.Sprintf("%v='%v'", k, v)
}
return ""
}
func getRandomFieldKey(m map[string]interface{}) string {
for k := range m {
return fmt.Sprintf("%v", k)
}
return ""
}
func setMapValues(m map[string]interface{}, p models.Point) {
fields, err := p.Fields()
if err != nil {
panic(err)
}
m["%f"] = getRandomFieldKey(fields)
m["%m"] = string(p.Name())
m["%t"] = getRandomTagPair(p.Tags())
m["%a"] = p.UnixNano()
}
func setArgs(m map[string]interface{}, args []string) []interface{} {
values := make([]interface{}, len(args))
for i, arg := range args {
values[i] = m[arg]
}
return values
}

View File

@@ -0,0 +1,42 @@
package statement
import (
"testing"
"github.com/influxdata/influxdb/stress/v2/stress_client"
)
func TestQuerySetID(t *testing.T) {
e := newTestQuery()
newID := "oaijnifo"
e.SetID(newID)
if e.StatementID != newID {
t.Errorf("Expected: %v\nGot: %v\n", newID, e.StatementID)
}
}
func TestQueryRun(t *testing.T) {
i := newTestQuery()
s, packageCh, _ := stressClient.NewTestStressTest()
// Listen to the other side of the directiveCh
go func() {
for pkg := range packageCh {
if i.TemplateString != string(pkg.Body) {
t.Fail()
}
pkg.Tracer.Done()
}
}()
i.Run(s)
}
func newTestQuery() *QueryStatement {
return &QueryStatement{
StatementID: "foo_ID",
Name: "foo_name",
TemplateString: "SELECT count(value) FROM cpu",
Args: []string{},
Count: 5,
Tracer: stressClient.NewTracer(map[string]string{}),
}
}

View File

@@ -0,0 +1,237 @@
package statement
import (
"encoding/json"
"fmt"
"log"
"math"
"sort"
"time"
influx "github.com/influxdata/influxdb/client/v2"
)
// TODO: Refactor this file to utilize a common interface
// This will make adding new reports easier in the future
// Runs performance numbers for insert statements
type insertReport struct {
name string
numRetries int
pointsPerSecond int
successfulWrites int
avgRequestBytes int
avgResponseTime time.Duration
stdDevResponseTime time.Duration
percentile time.Duration
columns []string
values [][]interface{}
}
// Returns the version of the report that is output to STDOUT
func (ir *insertReport) String() string {
tmplString := `Write Statement: %v
Points/Sec: %v
Resp Time Average: %v
Resp Time Standard Deviation: %v
95th Percentile Write Response: %v
Average Request Bytes: %v
Successful Write Reqs: %v
Retries: %v`
return fmt.Sprintf(tmplString,
ir.name,
ir.pointsPerSecond,
ir.avgResponseTime,
ir.stdDevResponseTime,
ir.percentile,
ir.avgRequestBytes,
ir.successfulWrites,
ir.numRetries)
}
// Returns a point representation of the report to be written to the ResultsDB
func (ir *insertReport) Point() *influx.Point {
measurement := "testDefault"
tags := map[string]string{}
fields := map[string]interface{}{"field": "blank"}
point, err := influx.NewPoint(measurement, tags, fields, time.Now())
if err != nil {
log.Fatalf("Error creating insertReport point\n measurement: %v\n tags: %v\n fields: %v\n error: %v\n", measurement, tags, fields, err)
}
return point
}
// Runs performance numbers for query statements
type queryReport struct {
name string
successfulReads int
responseBytes int
stddevResponseBytes int
avgResponseTime time.Duration
stdDevResponseTime time.Duration
percentile time.Duration
columns []string
values [][]interface{}
}
// Returns the version of the report that is output to STDOUT
func (qr *queryReport) String() string {
tmplString := `Query Statement: %v
Resp Time Average: %v
Resp Time Standard Deviation: %v
95th Percentile Read Response: %v
Query Resp Bytes Average: %v bytes
Successful Queries: %v`
return fmt.Sprintf(tmplString,
qr.name,
qr.avgResponseTime,
qr.stdDevResponseTime,
qr.percentile,
qr.responseBytes,
qr.successfulReads)
}
// Returns a point representation of the report to be written to the ResultsDB
func (qr *queryReport) Point() *influx.Point {
measurement := "testDefault"
tags := map[string]string{}
fields := map[string]interface{}{"field": "blank"}
point, err := influx.NewPoint(measurement, tags, fields, time.Now())
if err != nil {
log.Fatalf("Error creating queryReport point\n measurement: %v\n tags: %v\n fields: %v\n error: %v\n", measurement, tags, fields, err)
}
return point
}
// Runs performance numbers for InfluxQL statements
type influxQlReport struct {
statement string
responseTime time.Duration
success bool
columns []string
values [][]interface{}
}
// Returns the version of the report that is output to STDOUT
func (iqlr *influxQlReport) String() string {
// Fancy format success
var success string
switch iqlr.success {
case true:
success = "[√]"
case false:
success = "[X]"
}
return fmt.Sprintf("%v '%v' -> %v", success, iqlr.statement, iqlr.responseTime)
}
// Returns a point representation of the report to be written to the ResultsDB
func (iqlr *influxQlReport) Point() *influx.Point {
measurement := "testDefault"
tags := map[string]string{}
fields := map[string]interface{}{"field": "blank"}
point, err := influx.NewPoint(measurement, tags, fields, time.Now())
if err != nil {
log.Fatalf("Error creating influxQL point\n measurement: %v\n tags: %v\n fields: %v\n error: %v\n", measurement, tags, fields, err)
}
return point
}
// Given a field or tag name, returns the index where its values are found
func getColumnIndex(col string, columns []string) int {
index := -1
for i, column := range columns {
if column == col {
index = i
}
}
return index
}
// Given a full set of results, returns the average num_bytes
func numberBytes(columns []string, values [][]interface{}) int {
out := 0
index := getColumnIndex("num_bytes", columns)
for _, val := range values {
reqBytes, err := val[index].(json.Number).Int64()
if err != nil {
log.Fatalf("Error coercing json.Number to Int64\n json.Number:%v\n error: %v\n", val[index], err)
}
out += int(reqBytes)
}
return out / len(values)
}
// Counts the number of 200 (query) or 204 (write) responses
func countSuccesses(columns []string, values [][]interface{}) (out int) {
index := getColumnIndex("status_code", columns)
for _, val := range values {
status, err := val[index].(json.Number).Int64()
if err != nil {
log.Fatalf("Error coercing json.Number to Int64\n json.Number:%v\n error: %v\n", val[index], err)
}
if status == 204 || status == 200 {
out++
}
}
return out
}
// Counts the number of 500 status codes (retries)
func countRetries(columns []string, values [][]interface{}) (out int) {
index := getColumnIndex("status_code", columns)
for _, val := range values {
status, err := val[index].(json.Number).Int64()
if err != nil {
log.Fatalf("Error coercing json.Number to Int64\n json.Number:%v\n error: %v\n", val[index], err)
}
if status == 500 {
out++
}
}
return out
}
// Pulls out the response_time_ns values and formats them into ResponseTimes for reporting
func responseTimes(columns []string, values [][]interface{}) (rs ResponseTimes) {
rs = make([]ResponseTime, 0)
index := getColumnIndex("response_time_ns", columns)
for _, val := range values {
respTime, err := val[index].(json.Number).Int64()
if err != nil {
log.Fatalf("Error coercing json.Number to Int64\n json.Number:%v\n error: %v\n", val[index], err)
}
rs = append(rs, NewResponseTime(int(respTime)))
}
return rs
}
// Returns the 95th percentile response time
func percentile(rs ResponseTimes) time.Duration {
sort.Sort(rs)
return time.Duration(rs[(len(rs) * 19 / 20)].Value)
}
// Returns the average response time
func avgDuration(rs ResponseTimes) (out time.Duration) {
for _, t := range rs {
out += time.Duration(t.Value)
}
return out / time.Duration(len(rs))
}
// Returns the standard deviation of a sample of response times
func stddevDuration(rs ResponseTimes) (out time.Duration) {
avg := avgDuration(rs)
for _, t := range rs {
out += (avg - time.Duration(t.Value)) * (avg - time.Duration(t.Value))
}
return time.Duration(int64(math.Sqrt(float64(out) / float64(len(rs)))))
}

View File

@@ -0,0 +1,210 @@
package statement
import (
"encoding/json"
"fmt"
"strings"
"testing"
"time"
)
func TestInsertReportString(t *testing.T) {
ir := newTestInsertReport()
tmplString := `Write Statement: %v
Points/Sec: %v
Resp Time Average: %v
Resp Time Standard Deviation: %v
95th Percentile Write Response: %v
Average Request Bytes: %v
Successful Write Reqs: %v
Retries: %v`
expected := fmt.Sprintf(tmplString,
ir.name,
ir.pointsPerSecond,
ir.avgResponseTime,
ir.stdDevResponseTime,
ir.percentile,
ir.avgRequestBytes,
ir.successfulWrites,
ir.numRetries)
got := ir.String()
if expected != got {
t.Fail()
}
}
func TestInsertReportPoint(t *testing.T) {
ir := newTestInsertReport()
expected := "testDefault"
got := strings.Split(ir.Point().String(), " ")[0]
if expected != got {
t.Errorf("expected: %v\ngot: %v\n", expected, got)
}
}
func TestQueryReportString(t *testing.T) {
qr := newTestQueryReport()
tmplString := `Query Statement: %v
Resp Time Average: %v
Resp Time Standard Deviation: %v
95th Percentile Read Response: %v
Query Resp Bytes Average: %v bytes
Successful Queries: %v`
expected := fmt.Sprintf(tmplString,
qr.name,
qr.avgResponseTime,
qr.stdDevResponseTime,
qr.percentile,
qr.responseBytes,
qr.successfulReads)
got := qr.String()
if expected != got {
t.Fail()
}
}
func TestQueryReportPoint(t *testing.T) {
qr := newTestQueryReport()
expected := "testDefault"
got := strings.Split(qr.Point().String(), " ")[0]
if expected != got {
t.Errorf("expected: %v\ngot: %v\n", expected, got)
}
}
func TestInfluxQLReportString(t *testing.T) {
iqlr := newTestInfluxQLReport()
expected := fmt.Sprintf("[X] '%v' -> %v", iqlr.statement, iqlr.responseTime)
got := iqlr.String()
if expected != got {
t.Fail()
}
}
func TestInfluxQLReportPoint(t *testing.T) {
iqlr := newTestInfluxQLReport()
expected := "testDefault"
got := strings.Split(iqlr.Point().String(), " ")[0]
if expected != got {
t.Errorf("expected: %v\ngot: %v\n", expected, got)
}
}
func newTestInsertReport() *insertReport {
return &insertReport{
name: "foo_name",
numRetries: 0,
pointsPerSecond: 500000,
successfulWrites: 20000,
avgRequestBytes: 18932,
avgResponseTime: time.Duration(int64(20000)),
stdDevResponseTime: time.Duration(int64(20000)),
percentile: time.Duration(int64(20000)),
}
}
func newTestQueryReport() *queryReport {
return &queryReport{
name: "foo_name",
successfulReads: 2000,
responseBytes: 39049,
stddevResponseBytes: 9091284,
avgResponseTime: 139082,
stdDevResponseTime: 29487,
percentile: 8273491,
}
}
func newTestInfluxQLReport() *influxQlReport {
return &influxQlReport{
statement: "foo_name",
responseTime: time.Duration(int64(20000)),
success: false,
}
}
func TestGetColumnIndex(t *testing.T) {
col := "thing"
columns := []string{"thing"}
expected := 0
got := getColumnIndex(col, columns)
if expected != got {
t.Fail()
}
}
func TestNumberBytes(t *testing.T) {
columns := []string{"num_bytes"}
values := [][]interface{}{[]interface{}{json.Number("1")}}
expected := 1
got := numberBytes(columns, values)
if expected != got {
t.Fail()
}
}
func TestCountSuccesses(t *testing.T) {
columns := []string{"status_code"}
values := [][]interface{}{[]interface{}{json.Number("200")}}
expected := 1
got := countSuccesses(columns, values)
if expected != got {
t.Fail()
}
}
func TestCountRetries(t *testing.T) {
columns := []string{"status_code"}
values := [][]interface{}{[]interface{}{json.Number("500")}}
expected := 1
got := countRetries(columns, values)
if expected != got {
t.Fail()
}
}
func TestResponseTimes(t *testing.T) {
columns := []string{"response_time_ns"}
values := [][]interface{}{[]interface{}{json.Number("380")}}
expected := ResponseTimes([]ResponseTime{NewResponseTime(380)})
got := responseTimes(columns, values)
if expected[0].Value != got[0].Value {
t.Fail()
}
}
func TestPercentile(t *testing.T) {
rs := createTestResponseTimes()
expected := time.Duration(21)
got := percentile(rs)
if expected != got {
t.Errorf("expected: %v\ngot: %v\n", expected, got)
}
}
func TestAvgDuration(t *testing.T) {
rs := createTestResponseTimes()
expected := time.Duration(11)
got := avgDuration(rs)
if expected != got {
t.Errorf("expected: %v\ngot: %v\n", expected, got)
}
}
func TestStddevDuration(t *testing.T) {
rs := createTestResponseTimes()
expected := time.Duration(6)
got := stddevDuration(rs)
if expected != got {
t.Errorf("expected: %v\ngot: %v\n", expected, got)
}
}
func createTestResponseTimes() ResponseTimes {
rstms := []int{1, 2, 3, 4, 5, 6, 7, 13, 14, 15, 16, 17, 18, 19, 8, 9, 10, 11, 12, 20, 21, 22}
rs := []ResponseTime{}
for _, rst := range rstms {
rs = append(rs, NewResponseTime(rst))
}
return rs
}

View File

@@ -0,0 +1,40 @@
package statement
import (
"time"
)
// ResponseTime is a struct that contains `Value`
// `Time` pairing.
type ResponseTime struct {
Value int
Time time.Time
}
// NewResponseTime returns a new response time
// with value `v` and time `time.Now()`.
func NewResponseTime(v int) ResponseTime {
r := ResponseTime{Value: v, Time: time.Now()}
return r
}
// ResponseTimes is a slice of response times
type ResponseTimes []ResponseTime
// Implements the `Len` method for the
// sort.Interface type
func (rs ResponseTimes) Len() int {
return len(rs)
}
// Implements the `Less` method for the
// sort.Interface type
func (rs ResponseTimes) Less(i, j int) bool {
return rs[i].Value < rs[j].Value
}
// Implements the `Swap` method for the
// sort.Interface type
func (rs ResponseTimes) Swap(i, j int) {
rs[i], rs[j] = rs[j], rs[i]
}

View File

@@ -0,0 +1,45 @@
package statement
import (
"testing"
)
func TestNewResponseTime(t *testing.T) {
value := 100000
rs := NewResponseTime(value)
if rs.Value != value {
t.Errorf("expected: %v\ngot: %v\n", value, rs.Value)
}
}
func newResponseTimes() ResponseTimes {
return []ResponseTime{
NewResponseTime(100),
NewResponseTime(10),
}
}
func TestResponseTimeLen(t *testing.T) {
rs := newResponseTimes()
if rs.Len() != 2 {
t.Fail()
}
}
func TestResponseTimeLess(t *testing.T) {
rs := newResponseTimes()
less := rs.Less(1, 0)
if !less {
t.Fail()
}
}
func TestResponseTimeSwap(t *testing.T) {
rs := newResponseTimes()
rs0 := rs[0]
rs1 := rs[1]
rs.Swap(0, 1)
if rs0 != rs[1] || rs1 != rs[0] {
t.Fail()
}
}

View File

@@ -0,0 +1,59 @@
package statement
import (
"fmt"
"strings"
"github.com/influxdata/influxdb/stress/v2/stress_client"
)
// SetStatement set state variables for the test
type SetStatement struct {
Var string
Value string
StatementID string
Tracer *stressClient.Tracer
}
// SetID satisfies the Statement Interface
func (i *SetStatement) SetID(s string) {
i.StatementID = s
}
// Run satisfies the Statement Interface
func (i *SetStatement) Run(s *stressClient.StressTest) {
i.Tracer = stressClient.NewTracer(make(map[string]string))
d := stressClient.NewDirective(strings.ToLower(i.Var), strings.ToLower(i.Value), i.Tracer)
switch d.Property {
// Needs to be set on both StressTest and stressClient
// Set the write precision for generated points
case "precision":
s.Precision = d.Value
i.Tracer.Add(1)
s.SendDirective(d)
// Lives on StressTest
// Set the date for the first point entered into the database
case "startdate":
s.Lock()
s.StartDate = d.Value
s.Unlock()
// Lives on StressTest
// Set the BatchSize for writes
case "batchsize":
s.Lock()
s.BatchSize = parseInt(d.Value)
s.Unlock()
// All other variables live on stressClient
default:
i.Tracer.Add(1)
s.SendDirective(d)
}
i.Tracer.Wait()
}
// Report satisfies the Statement Interface
func (i *SetStatement) Report(s *stressClient.StressTest) string {
return fmt.Sprintf("SET %v = '%v'", i.Var, i.Value)
}

View File

@@ -0,0 +1,92 @@
package statement
import (
"fmt"
"testing"
"github.com/influxdata/influxdb/stress/v2/stress_client"
)
func TestSetSetID(t *testing.T) {
e := newTestSet("database", "foo")
newID := "oaijnifo"
e.SetID(newID)
if e.StatementID != newID {
t.Errorf("Expected: %v\nGot: %v\n", newID, e.StatementID)
}
}
func TestSetRun(t *testing.T) {
properties := []string{
"precision",
"startdate",
"batchsize",
"resultsaddress",
"testname",
"addresses",
"writeinterval",
"queryinterval",
"database",
"writeconcurrency",
"queryconcurrency",
}
for _, prop := range properties {
testSetRunUtl(t, prop, "1")
}
}
func testSetRunUtl(t *testing.T, property string, value string) {
i := newTestSet(property, value)
s, _, directiveCh := stressClient.NewTestStressTest()
// Listen to the other side of the directiveCh
go func() {
for d := range directiveCh {
if i.Var != d.Property {
t.Errorf("wrong property sent to stressClient\n expected: %v\n got: %v\n", i.Var, d.Property)
}
if i.Value != d.Value {
t.Errorf("wrong value sent to stressClient\n expected: %v\n got: %v\n", i.Value, d.Value)
}
d.Tracer.Done()
}
}()
// Run the statement
i.Run(s)
// Check the result
switch i.Var {
case "precision":
if i.Value != s.Precision {
t.Errorf("Failed to set %v\n", i.Var)
}
case "startdate":
if i.Value != s.StartDate {
t.Errorf("Failed to set %v\n", i.Var)
}
case "batchsize":
if parseInt(i.Value) != s.BatchSize {
t.Errorf("Failed to set %v\n", i.Var)
}
// TODO: Actually test this
case "resultsaddress":
default:
}
}
func TestSetReport(t *testing.T) {
set := newTestSet("this", "that")
s, _, _ := stressClient.NewTestStressTest()
rpt := set.Report(s)
expected := fmt.Sprintf("SET %v = '%v'", set.Var, set.Value)
if rpt != expected {
t.Errorf("expected: %v\ngot: %v\n", expected, rpt)
}
}
func newTestSet(toSet, value string) *SetStatement {
return &SetStatement{
Var: toSet,
Value: value,
Tracer: stressClient.NewTracer(make(map[string]string)),
StatementID: "fooID",
}
}

View File

@@ -0,0 +1,32 @@
package statement
import (
"log"
"strconv"
"github.com/influxdata/influxdb/stress/v2/stress_client"
)
// Statement is the common interface to shape the testing environment and prepare database requests
// The parser turns the 'statements' in the config file into Statements
type Statement interface {
Run(s *stressClient.StressTest)
Report(s *stressClient.StressTest) string
SetID(s string)
}
func parseInt(s string) int {
i, err := strconv.ParseInt(s, 10, 64)
if err != nil {
log.Fatalf("Error parsing integer:\n String: %v\n Error: %v\n", s, err)
}
return int(i)
}
func parseFloat(s string) int {
i, err := strconv.ParseFloat(s, 64)
if err != nil {
log.Fatalf("Error parsing float:\n String: %v\n Error: %v\n", s, err)
}
return int(i)
}

View File

@@ -0,0 +1,47 @@
package statement
// A Template contains all information to fill in templated variables in insert and query statements
type Template struct {
Tags []string
Function *Function
}
// Templates are a collection of Template
type Templates []*Template
// Init makes Stringers out of the Templates for quick point creation
func (t Templates) Init(seriesCount int) Stringers {
arr := make([]Stringer, len(t))
for i, tmp := range t {
if len(tmp.Tags) == 0 {
arr[i] = tmp.Function.NewStringer(seriesCount)
continue
}
arr[i] = tmp.NewTagFunc()
}
return arr
}
// Calculates the number of series implied by a template
func (t *Template) numSeries() int {
// If !t.Tags then tag cardinality is t.Function.Count
if len(t.Tags) == 0 {
return t.Function.Count
}
// Else tag cardinality is len(t.Tags)
return len(t.Tags)
}
// NewTagFunc returns a Stringer that loops through the given tags
func (t *Template) NewTagFunc() Stringer {
if len(t.Tags) == 0 {
return func() string { return "EMPTY TAGS" }
}
i := 0
return func() string {
s := t.Tags[i]
i = (i + 1) % len(t.Tags)
return s
}
}
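`NewTagFunc` is a closure that cycles through a fixed tag list, so each call yields the next series tag in round-robin order. The standalone sketch below shows the same closure pattern (the `roundRobin` name is illustrative):

```go
package main

import "fmt"

// roundRobin mirrors NewTagFunc above: the returned closure captures an
// index and cycles through the slice on each call.
func roundRobin(tags []string) func() string {
	i := 0
	return func() string {
		s := tags[i]
		i = (i + 1) % len(tags)
		return s
	}
}

func main() {
	next := roundRobin([]string{"host=a", "host=b"})
	fmt.Println(next(), next(), next()) // host=a host=b host=a
}
```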

View File

@@ -0,0 +1,72 @@
package statement
import (
"testing"
)
func TestNewTagFunc(t *testing.T) {
wtags := newTestTagsTemplate()
wfunc := newTestFunctionTemplate()
expected := wtags.Tags[0]
got := wtags.NewTagFunc()()
if got != expected {
t.Errorf("expected: %v\ngot: %v\n", expected, got)
}
expected = "EMPTY TAGS"
got = wfunc.NewTagFunc()()
if got != expected {
t.Errorf("expected: %v\ngot: %v\n", expected, got)
}
}
func TestNumSeries(t *testing.T) {
wtags := newTestTagsTemplate()
wfunc := newTestFunctionTemplate()
expected := len(wtags.Tags)
got := wtags.numSeries()
if got != expected {
t.Errorf("expected: %v\ngot: %v\n", expected, got)
}
expected = wfunc.Function.Count
got = wfunc.numSeries()
if got != expected {
t.Errorf("expected: %v\ngot: %v\n", expected, got)
}
}
func TestTemplatesInit(t *testing.T) {
tmpls := newTestTemplates()
s := tmpls.Init(5)
vals := s.Eval(spoofTime)
expected := tmpls[0].Tags[0]
got := vals[0]
if got != expected {
t.Errorf("expected: %v\ngot: %v\n", expected, got)
}
expected = "0i"
got = vals[1]
if got != expected {
t.Errorf("expected: %v\ngot: %v\n", expected, got)
}
}
func newTestTemplates() Templates {
return []*Template{
newTestTagsTemplate(),
newTestFunctionTemplate(),
}
}
func newTestTagsTemplate() *Template {
return &Template{
Tags: []string{"thing", "other_thing"},
}
}
func newTestFunctionTemplate() *Template {
return &Template{
Function: newIntIncFunction(),
}
}

View File

@@ -0,0 +1,51 @@
package statement
import (
"log"
"time"
)
// A Timestamp contains all information needed to generate timestamps for points created by InsertStatements
type Timestamp struct {
Count int
Duration time.Duration
Jitter bool
}
// Time returns the next timestamp needed by the InsertStatement
func (t *Timestamp) Time(startDate string, series int, precision string) func() int64 {
var start time.Time
var err error
if startDate == "now" {
start = time.Now()
} else {
start, err = time.Parse("2006-01-02", startDate)
}
if err != nil {
log.Fatalf("Error parsing start time from StartDate\n string: %v\n error: %v\n", startDate, err)
}
return nextTime(start, t.Duration, series, precision)
}
func nextTime(ti time.Time, step time.Duration, series int, precision string) func() int64 {
t := ti
count := 0
return func() int64 {
count++
if count > series {
t = t.Add(step)
count = 1
}
var timestamp int64
if precision == "s" {
timestamp = t.Unix()
} else {
timestamp = t.UnixNano()
}
return timestamp
}
}

View File

@@ -0,0 +1,31 @@
package statement
import (
"testing"
"time"
)
func TestTimestampTime(t *testing.T) {
tstp := newTestTimestamp()
function := tstp.Time("2016-01-01", 100, "s")
expected := int64(1451606400)
got := function()
if expected != got {
t.Errorf("expected: %v\ngot: %v\n", expected, got)
}
function = tstp.Time("now", 100, "ns")
expected = time.Now().UnixNano()
got = function()
if expected < got {
t.Errorf("expected: %v\ngot: %v\n", expected, got)
}
}
func newTestTimestamp() *Timestamp {
duration, _ := time.ParseDuration("10s")
return &Timestamp{
Count: 5001,
Duration: duration,
Jitter: false,
}
}

View File

@@ -0,0 +1,32 @@
package statement
import (
"fmt"
"time"
"github.com/influxdata/influxdb/stress/v2/stress_client"
)
// WaitStatement is a Statement Implementation to prevent the test from returning too early when running GoStatements
type WaitStatement struct {
StatementID string
runtime time.Duration
}
// SetID satisfies the Statement Interface
func (w *WaitStatement) SetID(s string) {
w.StatementID = s
}
// Run satisfies the Statement Interface
func (w *WaitStatement) Run(s *stressClient.StressTest) {
runtime := time.Now()
s.Wait()
w.runtime = time.Since(runtime)
}
// Report satisfies the Statement Interface
func (w *WaitStatement) Report(s *stressClient.StressTest) string {
return fmt.Sprintf("WAIT -> %v", w.runtime)
}

View File

@@ -0,0 +1,41 @@
package statement
import (
"strings"
"testing"
"github.com/influxdata/influxdb/stress/v2/stress_client"
)
func TestWaitSetID(t *testing.T) {
e := newTestWait()
newID := "oaijnifo"
e.SetID(newID)
if e.StatementID != newID {
t.Errorf("Expected: %v\nGot: %v\n", newID, e.StatementID)
}
}
func TestWaitRun(t *testing.T) {
e := newTestWait()
s, _, _ := stressClient.NewTestStressTest()
e.Run(s)
if e == nil {
t.Fail()
}
}
func TestWaitReport(t *testing.T) {
e := newTestWait()
s, _, _ := stressClient.NewTestStressTest()
rpt := e.Report(s)
if !strings.Contains(rpt, "WAIT") {
t.Fail()
}
}
func newTestWait() *WaitStatement {
return &WaitStatement{
StatementID: "fooID",
}
}

View File

@@ -0,0 +1,58 @@
package stressClient
import (
"log"
"time"
"github.com/influxdata/influxdb/models"
)
// Communes are a method for passing points between InsertStatements and QueryStatements.
type commune struct {
ch chan string
storedPoint models.Point
}
// NewCommune creates a new commune with a buffered chan of length n
func newCommune(n int) *commune {
return &commune{ch: make(chan string, n)}
}
func (c *commune) point(precision string) models.Point {
pt := []byte(<-c.ch)
p, err := models.ParsePointsWithPrecision(pt, time.Now().UTC(), precision)
if err != nil {
log.Fatalf("Error parsing point for commune\n point: %v\n error: %v\n", pt, err)
}
if len(p) == 0 {
return c.storedPoint
}
c.storedPoint = p[0]
return p[0]
}
// SetCommune creates a new commune on the StressTest
func (st *StressTest) SetCommune(name string) chan<- string {
com := newCommune(10)
st.communes[name] = com
return com.ch
}
// GetPoint is called by a QueryStatement and retrieves a point sent by the associated InsertStatement
func (st *StressTest) GetPoint(name, precision string) models.Point {
p := st.communes[name].point(precision)
// Function needs to return a point. Panic if it doesn't
if p == nil {
log.Fatal("Commune not returning point")
}
return p
}
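The commune is a buffered channel plus a cached last value: when the writer side hands over an empty or unparsable point, the reader falls back to the previously stored one instead of failing. A standalone sketch of that fallback pattern, using plain strings instead of `models.Point` (names are illustrative):

```go
package main

import "fmt"

// commune sketch: a buffered channel plus the last good value.
type commune struct {
	ch     chan string
	stored string
}

func newCommune(n int) *commune { return &commune{ch: make(chan string, n)} }

// point returns the next value from the channel, falling back to the
// previously stored value when the incoming one is empty.
func (c *commune) point() string {
	v := <-c.ch
	if v == "" {
		return c.stored
	}
	c.stored = v
	return v
}

func main() {
	c := newCommune(2)
	c.ch <- "cpu value=1"
	c.ch <- "" // simulates an empty parse result
	fmt.Println(c.point()) // cpu value=1
	fmt.Println(c.point()) // falls back to the stored point: cpu value=1
}
```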

View File

@@ -0,0 +1,57 @@
package stressClient
import (
"testing"
)
func TestCommunePoint(t *testing.T) {
comm := newCommune(5)
pt := "write,tag=tagVal fooField=5 1460912595"
comm.ch <- pt
point := comm.point("s")
if string(point.Name()) != "write" {
t.Errorf("expected: write\ngot: %v", string(point.Name()))
}
if point.Tags().GetString("tag") != "tagVal" {
t.Errorf("expected: tagVal\ngot: %v", point.Tags().GetString("tag"))
}
fields, err := point.Fields()
if err != nil {
t.Fatal(err)
}
if int(fields["fooField"].(float64)) != 5 {
t.Errorf("expected: 5\ngot: %v\n", fields["fooField"])
}
// Make sure the commune returns the previous point
comm.ch <- ""
point = comm.point("s")
if string(point.Name()) != "write" {
t.Errorf("expected: write\ngot: %v", string(point.Name()))
}
if point.Tags().GetString("tag") != "tagVal" {
t.Errorf("expected: tagVal\ngot: %v", point.Tags().GetString("tag"))
}
if int(fields["fooField"].(float64)) != 5 {
t.Errorf("expected: 5\ngot: %v\n", fields["fooField"])
}
}
func TestSetCommune(t *testing.T) {
sf, _, _ := NewTestStressTest()
ch := sf.SetCommune("foo_name")
ch <- "write,tag=tagVal fooField=5 1460912595"
pt := sf.GetPoint("foo_name", "s")
if string(pt.Name()) != "write" {
t.Errorf("expected: write\ngot: %v", string(pt.Name()))
}
if pt.Tags().GetString("tag") != "tagVal" {
t.Errorf("expected: tagVal\ngot: %v", pt.Tags().GetString("tag"))
}
fields, err := pt.Fields()
if err != nil {
t.Fatal(err)
}
if int(fields["fooField"].(float64)) != 5 {
t.Errorf("expected: 5\ngot: %v\n", fields["fooField"])
}
}

View File

@@ -0,0 +1,19 @@
package stressClient
// Directive is a struct to enable communication between SetStatements and the stressClient backend
// Directives change state for the stress test
type Directive struct {
Property string
Value string
Tracer *Tracer
}
// NewDirective creates a new instance of a Directive with the appropriate state variable to change
func NewDirective(property string, value string, tracer *Tracer) Directive {
d := Directive{
Property: property,
Value: value,
Tracer: tracer,
}
return d
}

View File

@@ -0,0 +1,20 @@
package stressClient
import (
"testing"
)
func TestNewDirective(t *testing.T) {
tr := NewTracer(map[string]string{})
prop := "foo_prop"
val := "foo_value"
dir := NewDirective(prop, val, tr)
got := dir.Property
if prop != got {
t.Errorf("expected: %v\ngot: %v\n", prop, got)
}
got = dir.Value
if val != got {
t.Errorf("expected: %v\ngot: %v\n", val, got)
}
}

View File

@@ -0,0 +1,22 @@
package stressClient
// Package is a struct to enable communication between InsertStatements, QueryStatements and InfluxQLStatements and the stressClient backend
// Packages carry either writes or queries in the []byte that makes up the Body
type Package struct {
T Type
Body []byte
StatementID string
Tracer *Tracer
}
// NewPackage creates a new package with the appropriate payload
func NewPackage(t Type, body []byte, statementID string, tracer *Tracer) Package {
p := Package{
T: t,
Body: body,
StatementID: statementID,
Tracer: tracer,
}
return p
}

View File

@@ -0,0 +1,16 @@
package stressClient
import (
"testing"
)
func TestNewPackage(t *testing.T) {
qry := []byte("SELECT * FROM foo")
statementID := "foo_id"
tr := NewTracer(map[string]string{})
pkg := NewPackage(Query, qry, statementID, tr)
got := string(pkg.Body)
if string(qry) != got {
t.Errorf("expected: %v\ngot: %v\n", qry, got)
}
}

View File

@@ -0,0 +1,95 @@
package stressClient
import (
"log"
"strconv"
"time"
influx "github.com/influxdata/influxdb/client/v2"
)
// reporting.go contains functions to emit tags and points from various parts of stressClient
// These points are then written to the ("_%v", sf.TestName) database
// These are the tags that stressClient adds to any response points
func (sc *stressClient) tags(statementID string) map[string]string {
tags := map[string]string{
"number_targets": fmtInt(len(sc.addresses)),
"precision": sc.precision,
"writers": fmtInt(sc.wconc),
"readers": fmtInt(sc.qconc),
"test_id": sc.testID,
"statement_id": statementID,
"write_interval": sc.wdelay,
"query_interval": sc.qdelay,
}
return tags
}
// These are the tags that the StressTest adds to any response points
func (st *StressTest) tags() map[string]string {
tags := map[string]string{
"precision": st.Precision,
"batch_size": fmtInt(st.BatchSize),
}
return tags
}
// This function makes a *client.Point for reporting on writes
func (sc *stressClient) writePoint(retries int, statementID string, statusCode int, responseTime time.Duration, addedTags map[string]string, writeBytes int) *influx.Point {
tags := sumTags(sc.tags(statementID), addedTags)
fields := map[string]interface{}{
"status_code": statusCode,
"response_time_ns": responseTime.Nanoseconds(),
"num_bytes": writeBytes,
}
point, err := influx.NewPoint("write", tags, fields, time.Now())
if err != nil {
log.Fatalf("Error creating write results point\n error: %v\n", err)
}
return point
}
// This function makes a *client.Point for reporting on queries
func (sc *stressClient) queryPoint(statementID string, body []byte, statusCode int, responseTime time.Duration, addedTags map[string]string) *influx.Point {
tags := sumTags(sc.tags(statementID), addedTags)
fields := map[string]interface{}{
"status_code": statusCode,
"num_bytes": len(body),
"response_time_ns": responseTime.Nanoseconds(),
}
point, err := influx.NewPoint("query", tags, fields, time.Now())
if err != nil {
log.Fatalf("Error creating query results point\n error: %v\n", err)
}
return point
}
// Adds two map[string]string together
func sumTags(tags1, tags2 map[string]string) map[string]string {
tags := make(map[string]string)
// Add all tags from first map to return map
for k, v := range tags1 {
tags[k] = v
}
// Add all tags from second map to return map
for k, v := range tags2 {
tags[k] = v
}
return tags
}
// Turns an int into a string
func fmtInt(i int) string {
return strconv.FormatInt(int64(i), 10)
}


@@ -0,0 +1,100 @@
package stressClient
import (
"testing"
"time"
)
func TestNewStressClientTags(t *testing.T) {
pe, _, _ := newTestStressClient("localhost:8086")
tags := pe.tags("foo_id")
expected := fmtInt(len(pe.addresses))
got := tags["number_targets"]
if expected != got {
t.Errorf("expected: %v\ngot: %v\n", expected, got)
}
expected = pe.precision
got = tags["precision"]
if expected != got {
t.Errorf("expected: %v\ngot: %v\n", expected, got)
}
expected = pe.wdelay
got = tags["write_interval"]
if expected != got {
t.Errorf("expected: %v\ngot: %v\n", expected, got)
}
expected = "foo_id"
got = tags["statement_id"]
if expected != got {
t.Errorf("expected: %v\ngot: %v\n", expected, got)
}
}
func TestNewStressTestTags(t *testing.T) {
sf, _, _ := NewTestStressTest()
tags := sf.tags()
expected := sf.Precision
got := tags["precision"]
if expected != got {
t.Errorf("expected: %v\ngot: %v\n", expected, got)
}
expected = fmtInt(sf.BatchSize)
got = tags["batch_size"]
if expected != got {
t.Errorf("expected: %v\ngot: %v\n", expected, got)
}
}
func TestWritePoint(t *testing.T) {
pe, _, _ := newTestStressClient("localhost:8086")
statementID := "foo_id"
responseCode := 200
responseTime := time.Duration(10 * time.Millisecond)
addedTags := map[string]string{"foo_tag": "foo_tag_value"}
writeBytes := 28051
pt := pe.writePoint(1, statementID, responseCode, responseTime, addedTags, writeBytes)
got := pt.Tags()["statement_id"]
if statementID != got {
t.Errorf("expected: %v\ngot: %v\n", statementID, got)
}
fields, err := pt.Fields()
if err != nil {
t.Fatal(err)
}
got2 := int(fields["status_code"].(int64))
if responseCode != got2 {
t.Errorf("expected: %v\ngot: %v\n", responseCode, got2)
}
expected := "write"
got = pt.Name()
if expected != got {
t.Errorf("expected: %v\ngot: %v\n", expected, got)
}
}
func TestQueryPoint(t *testing.T) {
pe, _, _ := newTestStressClient("localhost:8086")
statementID := "foo_id"
responseCode := 200
body := []byte{12}
responseTime := time.Duration(10 * time.Millisecond)
addedTags := map[string]string{"foo_tag": "foo_tag_value"}
pt := pe.queryPoint(statementID, body, responseCode, responseTime, addedTags)
got := pt.Tags()["statement_id"]
if statementID != got {
t.Errorf("expected: %v\ngot: %v\n", statementID, got)
}
fields, err := pt.Fields()
if err != nil {
t.Fatal(err)
}
got2 := int(fields["status_code"].(int64))
if responseCode != got2 {
t.Errorf("expected: %v\ngot: %v\n", responseCode, got2)
}
expected := "query"
got = pt.Name()
if expected != got {
t.Errorf("expected: %v\ngot: %v\n", expected, got)
}
}


@@ -0,0 +1,50 @@
package stressClient
import (
"log"
influx "github.com/influxdata/influxdb/client/v2"
)
// Response holds data scraped from InfluxDB HTTP responses turned into a *influx.Point for reporting
// See reporting.go for more information
// The Tracer contains a wait group sent from the statement. It needs to be decremented when the Response is consumed
type Response struct {
Point *influx.Point
Tracer *Tracer
}
// NewResponse creates a new instance of Response
func NewResponse(pt *influx.Point, tr *Tracer) Response {
return Response{
Point: pt,
Tracer: tr,
}
}
// AddTags adds additional tags to the point held in Response and returns the point
func (resp Response) AddTags(newTags map[string]string) (*influx.Point, error) {
// Pull off the current tags
tags := resp.Point.Tags()
// Add the new tags to the current tags
for tag, tagValue := range newTags {
tags[tag] = tagValue
}
// Make a new point
fields, err := resp.Point.Fields()
if err != nil {
return nil, err
}
pt, err := influx.NewPoint(resp.Point.Name(), tags, fields, resp.Point.Time())
if err != nil {
return nil, err
}
return pt, nil
}


@@ -0,0 +1,20 @@
package stressClient
import (
"testing"
)
func TestNewResponse(t *testing.T) {
pt := NewBlankTestPoint()
tr := NewTracer(map[string]string{})
r := NewResponse(pt, tr)
expected := "another_tag_value"
test, err := r.AddTags(map[string]string{"another_tag": "another_tag_value"})
if err != nil {
t.Fatal(err)
}
got := test.Tags()["another_tag"]
if expected != got {
t.Errorf("expected: %v\ngot: %v\n", expected, got)
}
}


@@ -0,0 +1,175 @@
package stressClient
import (
"fmt"
"log"
"sync"
influx "github.com/influxdata/influxdb/client/v2"
)
// NewStressTest creates the backend for the stress test
func NewStressTest() *StressTest {
packageCh := make(chan Package)
directiveCh := make(chan Directive)
responseCh := make(chan Response)
clnt, _ := influx.NewHTTPClient(influx.HTTPConfig{
Addr: fmt.Sprintf("http://%v/", "localhost:8086"),
})
s := &StressTest{
TestDB: "_stressTest",
Precision: "s",
StartDate: "2016-01-02",
BatchSize: 5000,
packageChan: packageCh,
directiveChan: directiveCh,
ResultsClient: clnt,
ResultsChan: responseCh,
communes: make(map[string]*commune),
TestID: randStr(10),
}
// Start the client service
startStressClient(packageCh, directiveCh, responseCh, s.TestID)
// Listen for Results coming in
s.resultsListen()
return s
}
// NewTestStressTest returns a StressTest to be used for testing Statements
func NewTestStressTest() (*StressTest, chan Package, chan Directive) {
packageCh := make(chan Package)
directiveCh := make(chan Directive)
s := &StressTest{
TestDB: "_stressTest",
Precision: "s",
StartDate: "2016-01-02",
BatchSize: 5000,
directiveChan: directiveCh,
packageChan: packageCh,
communes: make(map[string]*commune),
TestID: randStr(10),
}
return s, packageCh, directiveCh
}
// The StressTest is the Statement facing API that consumes Statement output and coordinates the test results
type StressTest struct {
TestID string
TestDB string
Precision string
StartDate string
BatchSize int
sync.WaitGroup
sync.Mutex
packageChan chan<- Package
directiveChan chan<- Directive
ResultsChan chan Response
communes map[string]*commune
ResultsClient influx.Client
}
// SendPackage is the public facing API for sending Queries and Points
func (st *StressTest) SendPackage(p Package) {
st.packageChan <- p
}
// SendDirective is the public facing API to set state variables in the test
func (st *StressTest) SendDirective(d Directive) {
st.directiveChan <- d
}
// Starts a go routine that listens for Results
func (st *StressTest) resultsListen() {
st.createDatabase(st.TestDB)
go func() {
bp := st.NewResultsPointBatch()
for resp := range st.ResultsChan {
switch resp.Point.Name() {
case "done":
st.ResultsClient.Write(bp)
resp.Tracer.Done()
default:
// Add the StressTest tags
pt, err := resp.AddTags(st.tags())
if err != nil {
panic(err)
}
// Add the point to the batch
bp = st.batcher(pt, bp)
resp.Tracer.Done()
}
}
}()
}
// NewResultsPointBatch creates a new batch of points for the results
func (st *StressTest) NewResultsPointBatch() influx.BatchPoints {
bp, _ := influx.NewBatchPoints(influx.BatchPointsConfig{
Database: st.TestDB,
Precision: "ns",
})
return bp
}
// Batches incoming Result.Point and flushes the batch once it reaches 5k in size
func (st *StressTest) batcher(pt *influx.Point, bp influx.BatchPoints) influx.BatchPoints {
if len(bp.Points()) <= 5000 {
bp.AddPoint(pt)
} else {
err := st.ResultsClient.Write(bp)
if err != nil {
log.Fatalf("Error writing performance stats\n error: %v\n", err)
}
bp = st.NewResultsPointBatch()
bp.AddPoint(pt) // start the new batch with the current point instead of dropping it
}
return bp
}
// Convenience database creation function
func (st *StressTest) createDatabase(db string) {
query := fmt.Sprintf("CREATE DATABASE %v", db)
res, err := st.ResultsClient.Query(influx.Query{Command: query})
if err != nil || res.Error() != nil {
log.Fatalf("error: no running influx server at localhost:8086")
}
}
// GetStatementResults is a convenience function for fetching all results given a StatementID
func (st *StressTest) GetStatementResults(sID, t string) (res []influx.Result) {
qryStr := fmt.Sprintf(`SELECT * FROM "%v" WHERE statement_id = '%v'`, t, sID)
return st.queryTestResults(qryStr)
}
// Runs the given qry on the test results database and returns the results, or nil if no series matched
func (st *StressTest) queryTestResults(qry string) (res []influx.Result) {
response, err := st.ResultsClient.Query(influx.Query{Command: qry, Database: st.TestDB})
if err != nil {
log.Fatalf("Error sending results query\n error: %v\n", err)
}
if response.Error() != nil {
log.Fatalf("Error sending results query\n error: %v\n", response.Error())
}
if response.Results[0].Series == nil {
return nil
}
return response.Results
}


@@ -0,0 +1,32 @@
package stressClient
import (
"testing"
"time"
influx "github.com/influxdata/influxdb/client/v2"
)
func NewBlankTestPoint() *influx.Point {
meas := "measurement"
tags := map[string]string{"fooTag": "fooTagValue"}
fields := map[string]interface{}{"value": 5920}
utc, _ := time.LoadLocation("UTC")
timestamp := time.Date(2016, time.Month(4), 20, 0, 0, 0, 0, utc)
pt, _ := influx.NewPoint(meas, tags, fields, timestamp)
return pt
}
func TestStressTestBatcher(t *testing.T) {
sf, _, _ := NewTestStressTest()
bpconf := influx.BatchPointsConfig{
Database: sf.TestDB,
Precision: "ns",
}
bp, _ := influx.NewBatchPoints(bpconf)
pt := NewBlankTestPoint()
bp = sf.batcher(pt, bp)
if len(bp.Points()) != 1 {
t.Fail()
}
}


@@ -0,0 +1,175 @@
package stressClient
import (
"strings"
"sync"
)
// Type refers to the different Package types
type Type int
// There are two package types, Write and Query
const (
Write Type = iota
Query
)
func startStressClient(packageCh <-chan Package, directiveCh <-chan Directive, responseCh chan<- Response, testID string) {
c := &stressClient{
testID: testID,
addresses: []string{"localhost:8086"},
ssl: false,
username: "",
password: "",
precision: "ns",
database: "stress",
startDate: "2016-01-01",
qdelay: "0s",
wdelay: "0s",
wconc: 10,
qconc: 5,
packageChan: packageCh,
directiveChan: directiveCh,
responseChan: responseCh,
}
// start listening for writes and queries
go c.listen()
// start listening for state changes
go c.directiveListen()
}
type stressClient struct {
testID string
// State for the Stress Test
addresses []string
precision string
startDate string
database string
wdelay string
qdelay string
username string
password string
ssl bool
// Channels from statements
packageChan <-chan Package
directiveChan <-chan Directive
// Response channel
responseChan chan<- Response
// Concurrency utilities
sync.WaitGroup
sync.Mutex
// Concurrency Limit for Writes and Reads
wconc int
qconc int
// Manage Read and Write concurrency separately
wc *ConcurrencyLimiter
rc *ConcurrencyLimiter
}
// NewTestStressClient returns a blank stressClient for testing
func newTestStressClient(url string) (*stressClient, chan Directive, chan Package) {
pkgChan := make(chan Package)
dirChan := make(chan Directive)
pe := &stressClient{
testID: "foo_id",
addresses: []string{url},
precision: "s",
startDate: "2016-01-01",
database: "fooDatabase",
wdelay: "50ms",
qdelay: "50ms",
ssl: false,
username: "",
password: "",
wconc: 5,
qconc: 5,
packageChan: pkgChan,
directiveChan: dirChan,
wc: NewConcurrencyLimiter(1),
rc: NewConcurrencyLimiter(1),
}
return pe, dirChan, pkgChan
}
// stressClient starts listening for Packages on the main channel
func (sc *stressClient) listen() {
defer sc.Wait()
sc.wc = NewConcurrencyLimiter(sc.wconc)
sc.rc = NewConcurrencyLimiter(sc.qconc)
l := NewConcurrencyLimiter((sc.wconc + sc.qconc) * 2)
counter := 0
for p := range sc.packageChan {
l.Increment()
go func(p Package) {
defer l.Decrement()
switch p.T {
case Write:
sc.spinOffWritePackage(p, (counter % len(sc.addresses)))
case Query:
sc.spinOffQueryPackage(p, (counter % len(sc.addresses)))
}
}(p)
counter++
}
}
// Set handles all SET requests for test state
func (sc *stressClient) directiveListen() {
for d := range sc.directiveChan {
sc.Lock()
switch d.Property {
// addresses is a []string of target InfluxDB instance(s) for the test
// comes in as a "|" separated array of addresses
case "addresses":
addr := strings.Split(d.Value, "|")
sc.addresses = addr
// precision is the write precision for InfluxDB
case "precision":
sc.precision = d.Value
// writeinterval is an optional delay between batches
case "writeinterval":
sc.wdelay = d.Value
// queryinterval is an optional delay between queries
case "queryinterval":
sc.qdelay = d.Value
// database is the InfluxDB database to target for both writes and queries
case "database":
sc.database = d.Value
// username for the target database
case "username":
sc.username = d.Value
// username for the target database
case "password":
sc.password = d.Value
// use https if sent true
case "ssl":
if d.Value == "true" {
sc.ssl = true
}
// writeconcurrency is the number of concurrent writers to the database
case "writeconcurrency":
conc := parseInt(d.Value)
sc.wconc = conc
sc.wc.NewMax(conc)
// queryconcurrency is the number of concurrent queriers of the database
case "queryconcurrency":
conc := parseInt(d.Value)
sc.qconc = conc
sc.rc.NewMax(conc)
}
d.Tracer.Done()
sc.Unlock()
}
}


@@ -0,0 +1,74 @@
package stressClient
import (
"fmt"
"io/ioutil"
"log"
"net/http"
"net/url"
"time"
)
func (sc *stressClient) spinOffQueryPackage(p Package, serv int) {
sc.Add(1)
sc.rc.Increment()
go func() {
// Send the query
sc.prepareQuerySend(p, serv)
sc.Done()
sc.rc.Decrement()
}()
}
// Prepares to send the GET request
func (sc *stressClient) prepareQuerySend(p Package, serv int) {
var queryTemplate string
if sc.ssl {
queryTemplate = "https://%v/query?db=%v&q=%v&u=%v&p=%v"
} else {
queryTemplate = "http://%v/query?db=%v&q=%v&u=%v&p=%v"
}
queryURL := fmt.Sprintf(queryTemplate, sc.addresses[serv], sc.database, url.QueryEscape(string(p.Body)), sc.username, sc.password)
// Send the query
sc.makeGet(queryURL, p.StatementID, p.Tracer)
// Query Interval enforcement
qi, _ := time.ParseDuration(sc.qdelay)
time.Sleep(qi)
}
// Sends the GET request, reads it, and handles errors
func (sc *stressClient) makeGet(addr, statementID string, tr *Tracer) {
// Make GET request
t := time.Now()
resp, err := http.Get(addr)
elapsed := time.Since(t)
if err != nil {
// A failed request leaves resp == nil; dereferencing it below would panic
log.Fatalf("Error making Query HTTP request\n error: %v\n", err)
}
defer resp.Body.Close()
// Read body and return it for Reporting
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
log.Fatalf("Error reading Query response body\n error: %v\n", err)
}
if resp.StatusCode != 200 {
log.Printf("Query returned non 200 status\n status: %v\n error: %v\n", resp.StatusCode, string(body))
}
// Send the response
sc.responseChan <- NewResponse(sc.queryPoint(statementID, body, resp.StatusCode, elapsed, tr.Tags), tr)
}
func success(r *http.Response) bool {
// ADD success for tcp, udp, etc
return r != nil && (r.StatusCode == 204 || r.StatusCode == 200)
}


@@ -0,0 +1,112 @@
package stressClient
import (
"bytes"
"fmt"
"io"
"io/ioutil"
"net/http"
"time"
)
// ###############################################
// A selection of methods to manage the write path
// ###############################################
// Packages up Package from channel in goroutine
func (sc *stressClient) spinOffWritePackage(p Package, serv int) {
sc.Add(1)
sc.wc.Increment()
go func() {
sc.retry(p, time.Duration(time.Nanosecond), serv)
sc.Done()
sc.wc.Decrement()
}()
}
// Implements backoff and retry logic for 500 responses
func (sc *stressClient) retry(p Package, backoff time.Duration, serv int) {
// Set Backoff Interval to 500ms
backoffInterval := time.Duration(500 * time.Millisecond)
// Arithmetic backoff for kicks
bo := backoff + backoffInterval
// Make the write request
resp, elapsed, err := sc.prepareWrite(p.Body, serv)
// Find number of times request has been retried
numBackoffs := int(bo/backoffInterval) - 1
// On 500 responses, resp == nil. This logic keeps the program from panicking
var statusCode int
if resp == nil {
statusCode = 500
} else {
statusCode = resp.StatusCode
}
// Make a point for reporting
point := sc.writePoint(numBackoffs, p.StatementID, statusCode, elapsed, p.Tracer.Tags, len(p.Body))
// Send the Response(point, tracer)
sc.responseChan <- NewResponse(point, p.Tracer)
// BatchInterval enforcement
bi, _ := time.ParseDuration(sc.wdelay)
time.Sleep(bi)
// Retry if the statusCode was not 204 or the err != nil
if statusCode != 204 || err != nil {
// Increment the *Tracer waitgroup if we are going to retry the request
p.Tracer.Add(1)
// Log the error if there is one
if err != nil {
fmt.Println(err)
}
// Backoff enforcement
time.Sleep(bo)
sc.retry(p, bo, serv)
}
}
// Prepares to send the POST request
func (sc *stressClient) prepareWrite(points []byte, serv int) (*http.Response, time.Duration, error) {
// Construct address string
var writeTemplate string
if sc.ssl {
writeTemplate = "https://%v/write?db=%v&precision=%v&u=%v&p=%v"
} else {
writeTemplate = "http://%v/write?db=%v&precision=%v&u=%v&p=%v"
}
address := fmt.Sprintf(writeTemplate, sc.addresses[serv], sc.database, sc.precision, sc.username, sc.password)
// Start timer
t := time.Now()
resp, err := makePost(address, bytes.NewBuffer(points))
elapsed := time.Since(t)
return resp, elapsed, err
}
// Send POST request, read it, and handle errors
func makePost(url string, points io.Reader) (*http.Response, error) {
resp, err := http.Post(url, "text/plain", points)
if err != nil {
return resp, fmt.Errorf("Error making write POST request\n error: %v\n url: %v\n", err, url)
}
body, _ := ioutil.ReadAll(resp.Body)
if resp.StatusCode != 204 {
return resp, fmt.Errorf("Write returned non-204 status code\n StatusCode: %v\n InfluxDB Error: %v\n", resp.StatusCode, string(body))
}
resp.Body.Close()
return resp, nil
}


@@ -0,0 +1,19 @@
package stressClient
import (
"sync"
)
// The Tracer carries tags and a waitgroup from the statements through the package life cycle
type Tracer struct {
Tags map[string]string
sync.WaitGroup
}
// NewTracer returns a Tracer with tags attached
func NewTracer(tags map[string]string) *Tracer {
return &Tracer{
Tags: tags,
}
}


@@ -0,0 +1,17 @@
package stressClient
import (
"testing"
)
func TestNewTracer(t *testing.T) {
tagValue := "foo_tag_value"
tracer := NewTracer(map[string]string{"foo_tag_key": tagValue})
got := tracer.Tags["foo_tag_key"]
if got != tagValue {
t.Errorf("expected: %v\ngot: %v", tagValue, got)
}
tracer.Add(1)
tracer.Done()
tracer.Wait()
}


@@ -0,0 +1,89 @@
package stressClient
import (
"crypto/rand"
"fmt"
"log"
"strconv"
"sync"
)
// ###########################################
// ConcurrencyLimiter and associated methods #
// ###########################################
// ConcurrencyLimiter ensures that no more than a specified
// max number of goroutines are running.
type ConcurrencyLimiter struct {
inc chan chan struct{}
dec chan struct{}
max int
count int
sync.Mutex
}
// NewConcurrencyLimiter returns a configured limiter that will
// ensure that calls to Increment will block if the max is hit.
func NewConcurrencyLimiter(max int) *ConcurrencyLimiter {
c := &ConcurrencyLimiter{
inc: make(chan chan struct{}),
dec: make(chan struct{}, max),
max: max,
}
go c.handleLimits()
return c
}
// Increment will increase the count of running goroutines by 1.
// if the number is currently at the max, the call to Increment
// will block until another goroutine decrements.
func (c *ConcurrencyLimiter) Increment() {
r := make(chan struct{})
c.inc <- r
<-r
}
// Decrement will reduce the count of running goroutines by 1
func (c *ConcurrencyLimiter) Decrement() {
c.dec <- struct{}{}
}
// NewMax resets the max of a ConcurrencyLimiter.
func (c *ConcurrencyLimiter) NewMax(i int) {
c.Lock()
defer c.Unlock()
c.max = i
}
// handleLimits runs in a goroutine to manage the count of
// running goroutines.
func (c *ConcurrencyLimiter) handleLimits() {
for {
r := <-c.inc
c.Lock()
if c.count >= c.max {
<-c.dec
c.count--
}
c.Unlock()
c.count++
r <- struct{}{}
}
}
// Utility integer parsing function
func parseInt(s string) int {
i, err := strconv.ParseInt(s, 10, 64)
if err != nil {
log.Fatalf("Error parsing integer:\n String: %v\n Error: %v\n", s, err)
}
return int(i)
}
// Utility for making random strings of length n
func randStr(n int) string {
b := make([]byte, n/2)
_, _ = rand.Read(b)
return fmt.Sprintf("%x", b)
}


@@ -0,0 +1,158 @@
package stressql
import (
"bufio"
"bytes"
"io"
"log"
"os"
"strings"
"github.com/influxdata/influxdb/influxql"
"github.com/influxdata/influxdb/stress/v2/statement"
stressql "github.com/influxdata/influxdb/stress/v2/stressql/statement"
)
// Token represents a lexical token.
type Token int
// These are the lexical tokens used by the file parser
const (
ILLEGAL Token = iota
EOF
STATEMENT
BREAK
)
var eof = rune(0)
func check(e error) {
if e != nil {
log.Fatal(e)
}
}
func isNewline(r rune) bool {
return r == '\n'
}
// Scanner scans the file and tokenizes the raw text
type Scanner struct {
r *bufio.Reader
}
// NewScanner returns a Scanner
func NewScanner(r io.Reader) *Scanner {
return &Scanner{r: bufio.NewReader(r)}
}
func (s *Scanner) read() rune {
ch, _, err := s.r.ReadRune()
if err != nil {
return eof
}
return ch
}
func (s *Scanner) unread() { _ = s.r.UnreadRune() }
func (s *Scanner) peek() rune {
ch := s.read()
s.unread()
return ch
}
// Scan moves the Scanner forward one character
func (s *Scanner) Scan() (tok Token, lit string) {
ch := s.read()
if isNewline(ch) {
s.unread()
return s.scanNewlines()
} else if ch == eof {
return EOF, ""
} else {
s.unread()
return s.scanStatements()
}
// golint marks as unreachable code
// return ILLEGAL, string(ch)
}
func (s *Scanner) scanNewlines() (tok Token, lit string) {
var buf bytes.Buffer
buf.WriteRune(s.read())
for {
if ch := s.read(); ch == eof {
break
} else if !isNewline(ch) {
s.unread()
break
} else {
buf.WriteRune(ch)
}
}
return BREAK, buf.String()
}
func (s *Scanner) scanStatements() (tok Token, lit string) {
var buf bytes.Buffer
buf.WriteRune(s.read())
for {
if ch := s.read(); ch == eof {
break
} else if isNewline(ch) && isNewline(s.peek()) {
s.unread()
break
} else if isNewline(ch) {
s.unread()
buf.WriteRune(ch)
} else {
buf.WriteRune(ch)
}
}
return STATEMENT, buf.String()
}
// ParseStatements takes a configFile and returns a slice of Statements
func ParseStatements(file string) ([]statement.Statement, error) {
seq := []statement.Statement{}
f, err := os.Open(file)
check(err)
s := NewScanner(f)
for {
t, l := s.Scan()
if t == EOF {
break
}
_, err := influxql.ParseStatement(l)
if err == nil {
seq = append(seq, &statement.InfluxqlStatement{Query: l, StatementID: stressql.RandStr(10)})
} else if t == BREAK {
continue
} else {
f := strings.NewReader(l)
p := stressql.NewParser(f)
s, err := p.Parse()
if err != nil {
return nil, err
}
seq = append(seq, s)
}
}
f.Close()
return seq, nil
}


@@ -0,0 +1,16 @@
package stressql
import "testing"
// Pulls the default configFile and makes sure it parses
func TestParseStatements(t *testing.T) {
stmts, err := ParseStatements("../iql/file.iql")
if err != nil {
t.Error(err)
}
expected := 15
got := len(stmts)
if expected != got {
t.Errorf("expected: %v\ngot: %v\n", expected, got)
}
}


@@ -0,0 +1,682 @@
package statement
import (
"bufio"
"bytes"
"crypto/rand"
"fmt"
"io"
"log"
"strconv"
"strings"
"time"
"github.com/influxdata/influxdb/stress/v2/statement"
)
// Token represents a lexical token.
type Token int
// The following tokens represent the different values in the AST that make up stressql
const (
ILLEGAL Token = iota
EOF
WS
literalBeg
// IDENT and the following are InfluxQL literal tokens.
IDENT // main
NUMBER // 12345.67
DURATIONVAL // 13h
STRING // "abc"
BADSTRING // "abc
TEMPLATEVAR // %f
literalEnd
COMMA // ,
LPAREN // (
RPAREN // )
LBRACKET // [
RBRACKET // ]
PIPE // |
PERIOD // .
keywordBeg
SET
USE
QUERY
INSERT
GO
DO
WAIT
STR
INT
FLOAT
EXEC
keywordEnd
)
var eof = rune(1)
func isWhitespace(ch rune) bool { return ch == ' ' || ch == '\t' || ch == '\n' }
func isDigit(r rune) bool {
return r >= '0' && r <= '9'
}
func isLetter(ch rune) bool {
return (ch >= 'a' && ch <= 'z') || (ch >= 'A' && ch <= 'Z') || (ch == '@')
}
// Scanner scans over the file and converts the raw text into tokens
type Scanner struct {
r *bufio.Reader
}
// NewScanner returns a Scanner
func NewScanner(r io.Reader) *Scanner {
return &Scanner{r: bufio.NewReader(r)}
}
func (s *Scanner) read() rune {
ch, _, err := s.r.ReadRune()
if err != nil {
return eof
}
return ch
}
func (s *Scanner) unread() { _ = s.r.UnreadRune() }
// Scan moves to the next character in the file and returns a tokenized version as well as the literal
func (s *Scanner) Scan() (tok Token, lit string) {
ch := s.read()
if isWhitespace(ch) {
s.unread()
return s.scanWhitespace()
} else if isLetter(ch) {
s.unread()
return s.scanIdent()
} else if isDigit(ch) {
s.unread()
return s.scanNumber()
}
switch ch {
case eof:
return EOF, ""
case '"':
s.unread()
return s.scanIdent()
case '%':
s.unread()
return s.scanTemplateVar()
case ',':
return COMMA, ","
case '.':
return PERIOD, "."
case '(':
return LPAREN, "("
case ')':
return RPAREN, ")"
case '[':
return LBRACKET, "["
case ']':
return RBRACKET, "]"
case '|':
return PIPE, "|"
}
return ILLEGAL, string(ch)
}
func (s *Scanner) scanWhitespace() (tok Token, lit string) {
var buf bytes.Buffer
buf.WriteRune(s.read())
for {
if ch := s.read(); ch == eof {
break
} else if !isWhitespace(ch) {
s.unread()
break
} else {
buf.WriteRune(ch)
}
}
return WS, buf.String()
}
func (s *Scanner) scanIdent() (tok Token, lit string) {
var buf bytes.Buffer
buf.WriteRune(s.read())
for {
if ch := s.read(); ch == eof {
break
} else if !isLetter(ch) && !isDigit(ch) && ch != '_' && ch != ':' && ch != '=' && ch != '-' {
s.unread()
break
} else {
_, _ = buf.WriteRune(ch)
}
}
switch strings.ToUpper(buf.String()) {
case "SET":
return SET, buf.String()
case "USE":
return USE, buf.String()
case "QUERY":
return QUERY, buf.String()
case "INSERT":
return INSERT, buf.String()
case "EXEC":
return EXEC, buf.String()
case "WAIT":
return WAIT, buf.String()
case "GO":
return GO, buf.String()
case "DO":
return DO, buf.String()
case "STR":
return STR, buf.String()
case "FLOAT":
return FLOAT, buf.String()
case "INT":
return INT, buf.String()
}
return IDENT, buf.String()
}
func (s *Scanner) scanTemplateVar() (tok Token, lit string) {
var buf bytes.Buffer
buf.WriteRune(s.read())
buf.WriteRune(s.read())
return TEMPLATEVAR, buf.String()
}
func (s *Scanner) scanNumber() (tok Token, lit string) {
var buf bytes.Buffer
buf.WriteRune(s.read())
for {
if ch := s.read(); ch == eof {
break
} else if ch == 'n' || ch == 's' || ch == 'm' {
_, _ = buf.WriteRune(ch)
return DURATIONVAL, buf.String()
} else if !isDigit(ch) {
s.unread()
break
} else {
_, _ = buf.WriteRune(ch)
}
}
return NUMBER, buf.String()
}
/////////////////////////////////
// PARSER ///////////////////////
/////////////////////////////////
// Parser turns the file from raw text into an AST
type Parser struct {
s *Scanner
buf struct {
tok Token
lit string
n int
}
}
// NewParser creates a new Parser
func NewParser(r io.Reader) *Parser {
return &Parser{s: NewScanner(r)}
}
// Parse returns a Statement
func (p *Parser) Parse() (statement.Statement, error) {
tok, lit := p.scanIgnoreWhitespace()
switch tok {
case QUERY:
p.unscan()
return p.ParseQueryStatement()
case INSERT:
p.unscan()
return p.ParseInsertStatement()
case EXEC:
p.unscan()
return p.ParseExecStatement()
case SET:
p.unscan()
return p.ParseSetStatement()
case GO:
p.unscan()
return p.ParseGoStatement()
case WAIT:
p.unscan()
return p.ParseWaitStatement()
}
return nil, fmt.Errorf("Improper syntax\n unknown token found between statements, token: %v\n", lit)
}
// ParseQueryStatement returns a QueryStatement
func (p *Parser) ParseQueryStatement() (*statement.QueryStatement, error) {
stmt := &statement.QueryStatement{
StatementID: RandStr(10),
}
if tok, lit := p.scanIgnoreWhitespace(); tok != QUERY {
return nil, fmt.Errorf("Error parsing Query Statement\n Expected: QUERY\n Found: %v\n", lit)
}
tok, lit := p.scanIgnoreWhitespace()
if tok != IDENT {
return nil, fmt.Errorf("Error parsing Query Statement\n Expected: IDENT\n Found: %v\n", lit)
}
stmt.Name = lit
for {
tok, lit := p.scan()
if tok == TEMPLATEVAR {
stmt.TemplateString += "%v"
stmt.Args = append(stmt.Args, lit)
} else if tok == DO {
tok, lit := p.scanIgnoreWhitespace()
if tok != NUMBER {
return nil, fmt.Errorf("Error parsing Query Statement\n Expected: NUMBER\n Found: %v\n", lit)
}
// Parse out the integer
i, err := strconv.ParseInt(lit, 10, 64)
if err != nil {
log.Fatalf("Error parsing integer in Query Statement:\n string: %v\n error: %v\n", lit, err)
}
stmt.Count = int(i)
break
} else if tok == WS && lit == "\n" {
continue
} else {
stmt.TemplateString += lit
}
}
return stmt, nil
}
// ParseInsertStatement returns an InsertStatement
func (p *Parser) ParseInsertStatement() (*statement.InsertStatement, error) {
// Initialize the InsertStatement with a statementId
stmt := &statement.InsertStatement{
StatementID: RandStr(10),
}
// If the first word is INSERT
if tok, lit := p.scanIgnoreWhitespace(); tok != INSERT {
return nil, fmt.Errorf("Error parsing Insert Statement\n Expected: INSERT\n Found: %v\n", lit)
}
// Next should come the NAME of the statement. It is IDENT type
tok, lit := p.scanIgnoreWhitespace()
if tok != IDENT {
return nil, fmt.Errorf("Error parsing Insert Statement\n Expected: IDENT\n Found: %v\n", lit)
}
// Set the Name
stmt.Name = lit
// Next char should be a newline
tok, lit = p.scan()
if tok != WS {
return nil, fmt.Errorf("Error parsing Insert Statement\n Expected: WS\n Found: %v\n", lit)
}
// We are now scanning the tags line
var prev Token
inTags := true
for {
// Start for loop by scanning
tok, lit = p.scan()
// If the scanned token is WS then we are just entering tags or leaving tags or fields
if tok == WS {
// If previous is COMMA then we are leaving measurement, continue
if prev == COMMA {
continue
}
// Otherwise we need to add a space to the template string and we are out of tags
stmt.TemplateString += " "
inTags = false
} else if tok == LBRACKET {
// If we are still inTags and there is a LBRACKET we are adding another template
if inTags {
stmt.TagCount++
}
// Add a %v placeholder to be filled with the template result
stmt.TemplateString += "%v"
// parse template should return a template type
expr, err := p.ParseTemplate()
// If there is a Template parsing error return it
if err != nil {
return nil, err
}
// Add template to parsed select statement
stmt.Templates = append(stmt.Templates, expr)
// A number signifies that we are in the Timestamp section
} else if tok == NUMBER {
// Add a placeholder to the template string to be filled with the timestamp
stmt.TemplateString += "%v"
p.unscan()
// Parse out the Timestamp
ts, err := p.ParseTimestamp()
// If there is a Timestamp parsing error return it
if err != nil {
return nil, err
}
// Set the Timestamp
stmt.Timestamp = ts
// Break loop as InsertStatement ends
break
} else if tok != IDENT && tok != COMMA {
return nil, fmt.Errorf("Error parsing Insert Statement\n Expected: IDENT or COMMA\n Found: %v\n", lit)
} else {
prev = tok
stmt.TemplateString += lit
}
}
return stmt, nil
}
// ParseTemplate returns a Template
func (p *Parser) ParseTemplate() (*statement.Template, error) {
// Blank template
tmplt := &statement.Template{}
for {
// Scan to start loop
tok, lit := p.scanIgnoreWhitespace()
// If tok is IDENT, explicit tag values were passed; add them to the list of tags
if tok == IDENT {
tmplt.Tags = append(tmplt.Tags, lit)
// Different flavors of functions
} else if tok == INT || tok == FLOAT || tok == STR {
p.unscan()
// Parse out the function
fn, err := p.ParseFunction()
// If there is a Function parsing error return it
if err != nil {
return nil, err
}
// Set the Function on the Template
tmplt.Function = fn
// End of Function
} else if tok == RBRACKET {
break
}
}
return tmplt, nil
}
// ParseExecStatement returns an ExecStatement
func (p *Parser) ParseExecStatement() (*statement.ExecStatement, error) {
// TODO: parse actual paths to scripts; currently only
// IDENT script names are supported
stmt := &statement.ExecStatement{
StatementID: RandStr(10),
}
if tok, lit := p.scanIgnoreWhitespace(); tok != EXEC {
return nil, fmt.Errorf("Error parsing Exec Statement\n Expected: EXEC\n Found: %v\n", lit)
}
tok, lit := p.scanIgnoreWhitespace()
if tok != IDENT {
return nil, fmt.Errorf("Error parsing Exec Statement\n Expected: IDENT\n Found: %v\n", lit)
}
stmt.Script = lit
return stmt, nil
}
// ParseSetStatement returns a SetStatement
func (p *Parser) ParseSetStatement() (*statement.SetStatement, error) {
stmt := &statement.SetStatement{
StatementID: RandStr(10),
}
if tok, lit := p.scanIgnoreWhitespace(); tok != SET {
return nil, fmt.Errorf("Error parsing Set Statement\n Expected: SET\n Found: %v\n", lit)
}
tok, lit := p.scanIgnoreWhitespace()
if tok != IDENT {
return nil, fmt.Errorf("Error parsing Set Statement\n Expected: IDENT\n Found: %v\n", lit)
}
stmt.Var = lit
tok, lit = p.scanIgnoreWhitespace()
if tok != LBRACKET {
return nil, fmt.Errorf("Error parsing Set Statement\n Expected: LBRACKET\n Found: %v\n", lit)
}
for {
tok, lit = p.scanIgnoreWhitespace()
if tok == RBRACKET {
break
} else if lit != "-" && lit != ":" && tok != IDENT && tok != NUMBER && tok != DURATIONVAL && tok != PERIOD && tok != PIPE {
return nil, fmt.Errorf("Error parsing Set Statement\n Expected: IDENT || NUMBER || DURATION\n Found: %v\n", lit)
}
stmt.Value += lit
}
return stmt, nil
}
// ParseWaitStatement returns a WaitStatement
func (p *Parser) ParseWaitStatement() (*statement.WaitStatement, error) {
stmt := &statement.WaitStatement{
StatementID: RandStr(10),
}
if tok, lit := p.scanIgnoreWhitespace(); tok != WAIT {
return nil, fmt.Errorf("Error parsing Wait Statement\n Expected: WAIT\n Found: %v\n", lit)
}
return stmt, nil
}
// ParseGoStatement returns a GoStatement
func (p *Parser) ParseGoStatement() (*statement.GoStatement, error) {
stmt := &statement.GoStatement{}
stmt.StatementID = RandStr(10)
if tok, lit := p.scanIgnoreWhitespace(); tok != GO {
return nil, fmt.Errorf("Error parsing Go Statement\n Expected: GO\n Found: %v\n", lit)
}
var body statement.Statement
var err error
tok, lit := p.scanIgnoreWhitespace()
switch tok {
case QUERY:
p.unscan()
body, err = p.ParseQueryStatement()
case INSERT:
p.unscan()
body, err = p.ParseInsertStatement()
case EXEC:
p.unscan()
body, err = p.ParseExecStatement()
default:
return nil, fmt.Errorf("Error parsing Go Statement\n Expected: QUERY, INSERT, or EXEC\n Found: %v\n", lit)
}
if err != nil {
return nil, err
}
stmt.Statement = body
return stmt, nil
}
// ParseFunction returns a Function
func (p *Parser) ParseFunction() (*statement.Function, error) {
fn := &statement.Function{}
_, lit := p.scanIgnoreWhitespace()
fn.Type = lit
_, lit = p.scanIgnoreWhitespace()
fn.Fn = lit
tok, lit := p.scanIgnoreWhitespace()
if tok != LPAREN {
return nil, fmt.Errorf("Error parsing Insert template function\n Expected: LPAREN\n Found: %v\n", lit)
}
tok, lit = p.scanIgnoreWhitespace()
if tok != NUMBER {
return nil, fmt.Errorf("Error parsing Insert template function\n Expected: NUMBER\n Found: %v\n", lit)
}
// Parse out the integer
i, err := strconv.ParseInt(lit, 10, 64)
if err != nil {
log.Fatalf("Error parsing integer in Insert template function:\n string: %v\n error: %v\n", lit, err)
}
fn.Argument = int(i)
tok, lit = p.scanIgnoreWhitespace()
if tok != RPAREN {
return nil, fmt.Errorf("Error parsing Insert template function\n Expected: RPAREN\n Found: %v\n", lit)
}
tok, lit = p.scanIgnoreWhitespace()
if tok != NUMBER {
return nil, fmt.Errorf("Error parsing Insert template function\n Expected: NUMBER\n Found: %v\n", lit)
}
// Parse out the integer
i, err = strconv.ParseInt(lit, 10, 64)
if err != nil {
log.Fatalf("Error parsing integer in Insert template function:\n string: %v\n error: %v\n", lit, err)
}
fn.Count = int(i)
return fn, nil
}
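As a sketch of the grammar ParseFunction consumes — `<type> <fn>(<argument>) <count>`, e.g. `int rand(1000) 100` — the standalone helper below maps that shape onto the same four fields. The `function` type and `parseFunction` name are illustrative stand-ins, not the parser's actual API:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// function is a stand-in for statement.Function.
type function struct {
	Type     string
	Fn       string
	Argument int
	Count    int
}

// parseFunction splits "<type> <fn>(<argument>) <count>" on spaces
// and parentheses, then converts the two numeric parts.
func parseFunction(s string) (function, error) {
	var f function
	fields := strings.FieldsFunc(s, func(r rune) bool {
		return r == ' ' || r == '(' || r == ')'
	})
	if len(fields) != 4 {
		return f, fmt.Errorf("expected 4 parts, got %d", len(fields))
	}
	f.Type, f.Fn = fields[0], fields[1]
	arg, err := strconv.Atoi(fields[2])
	if err != nil {
		return f, err
	}
	count, err := strconv.Atoi(fields[3])
	if err != nil {
		return f, err
	}
	f.Argument, f.Count = arg, count
	return f, nil
}

func main() {
	f, err := parseFunction("int rand(1000) 100")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", f)
}
```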
// ParseTimestamp returns a Timestamp
func (p *Parser) ParseTimestamp() (*statement.Timestamp, error) {
ts := &statement.Timestamp{}
tok, lit := p.scanIgnoreWhitespace()
if tok != NUMBER {
return nil, fmt.Errorf("Error parsing Insert timestamp\n Expected: NUMBER\n Found: %v\n", lit)
}
// Parse out the integer
i, err := strconv.ParseInt(lit, 10, 64)
if err != nil {
log.Fatalf("Error parsing integer in Insert timestamp:\n string: %v\n error: %v\n", lit, err)
}
ts.Count = int(i)
tok, lit = p.scanIgnoreWhitespace()
if tok != DURATIONVAL {
return nil, fmt.Errorf("Error parsing Insert timestamp\n Expected: DURATION\n Found: %v\n", lit)
}
// Parse out the duration
dur, err := time.ParseDuration(lit)
if err != nil {
log.Fatalf("Error parsing duration in Insert timestamp:\n string: %v\n error: %v\n", lit, err)
}
ts.Duration = dur
return ts, nil
}
// scan returns the next token from the underlying scanner,
// or the buffered token if unscan was called.
func (p *Parser) scan() (tok Token, lit string) {
// If we have a token on the buffer, then return it.
if p.buf.n != 0 {
p.buf.n = 0
return p.buf.tok, p.buf.lit
}
// Otherwise read the next token from the scanner.
tok, lit = p.s.Scan()
// Save it to the buffer in case we unscan later.
p.buf.tok, p.buf.lit = tok, lit
return
}
// scanIgnoreWhitespace scans the next non-whitespace token.
func (p *Parser) scanIgnoreWhitespace() (tok Token, lit string) {
tok, lit = p.scan()
if tok == WS {
tok, lit = p.scan()
}
return
}
// unscan pushes the previously read token back onto the buffer.
func (p *Parser) unscan() { p.buf.n = 1 }
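The scan/unscan pair above implements the single-token lookahead buffer described in the parser article the tool's README points to: unscan sets a flag so the next scan replays the buffered token instead of reading a new one. A minimal standalone sketch of the same pattern (the `token`/`parser` types here are stand-ins, not this package's own):

```go
package main

import "fmt"

type token struct {
	tok string
	lit string
}

type parser struct {
	src []token
	pos int
	buf struct {
		tok token
		n   int // buffer size, at most 1
	}
}

// scan replays the buffered token if present, otherwise reads
// the next token and remembers it for a possible unscan.
func (p *parser) scan() token {
	if p.buf.n != 0 {
		p.buf.n = 0
		return p.buf.tok
	}
	t := p.src[p.pos]
	p.pos++
	p.buf.tok = t
	return t
}

// unscan pushes the previously read token back onto the buffer.
func (p *parser) unscan() { p.buf.n = 1 }

func main() {
	p := &parser{src: []token{{"IDENT", "cpu"}, {"COMMA", ","}}}
	first := p.scan()
	p.unscan()
	again := p.scan()
	fmt.Println(first.lit == again.lit) // true
}
```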
// RandStr returns a string of n random hex characters (n is assumed even)
func RandStr(n int) string {
b := make([]byte, n/2)
_, _ = rand.Read(b)
return fmt.Sprintf("%x", b)
}
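RandStr builds statement IDs by hex-encoding n/2 random bytes, so the result has exactly n hex characters for even n. A self-contained sketch (the file's import block is outside this hunk, so `crypto/rand` is an assumption here):

```go
package main

import (
	"crypto/rand" // assumed: the original import block is not shown in this hunk
	"fmt"
)

// randStr mirrors RandStr above: hex-encoding n/2 random bytes
// yields exactly n hex characters when n is even.
func randStr(n int) string {
	b := make([]byte, n/2)
	_, _ = rand.Read(b)
	return fmt.Sprintf("%x", b)
}

func main() {
	id := randStr(10)
	fmt.Println(len(id)) // 10
}
```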


@@ -0,0 +1,243 @@
package statement
import (
// "fmt"
"reflect"
"strings"
"testing"
"time"
"github.com/influxdata/influxdb/stress/v2/statement"
)
func newParserFromString(s string) *Parser {
f := strings.NewReader(s)
p := NewParser(f)
return p
}
func TestParser_ParseStatement(t *testing.T) {
var tests = []struct {
skip bool
s string
stmt statement.Statement
err string
}{
// QUERY
{
s: "QUERY basicCount\nSELECT count(%f) FROM cpu\nDO 100",
stmt: &statement.QueryStatement{Name: "basicCount", TemplateString: "SELECT count(%v) FROM cpu", Args: []string{"%f"}, Count: 100},
},
{
s: "QUERY basicCount\nSELECT count(%f) FROM %m\nDO 100",
stmt: &statement.QueryStatement{Name: "basicCount", TemplateString: "SELECT count(%v) FROM %v", Args: []string{"%f", "%m"}, Count: 100},
},
{
skip: true, // SHOULD CAUSE AN ERROR
s: "QUERY\nSELECT count(%f) FROM %m\nDO 100",
err: "Missing Name",
},
// INSERT
{
s: "INSERT mockCpu\ncpu,\nhost=[us-west|us-east|eu-north],server_id=[str rand(7) 1000]\nbusy=[int rand(1000) 100],free=[float rand(10) 0]\n100000 10s",
stmt: &statement.InsertStatement{
Name: "mockCpu",
TemplateString: "cpu,host=%v,server_id=%v busy=%v,free=%v %v",
TagCount: 2,
Templates: []*statement.Template{
&statement.Template{
Tags: []string{"us-west", "us-east", "eu-north"},
},
&statement.Template{
Function: &statement.Function{Type: "str", Fn: "rand", Argument: 7, Count: 1000},
},
&statement.Template{
Function: &statement.Function{Type: "int", Fn: "rand", Argument: 1000, Count: 100},
},
&statement.Template{
Function: &statement.Function{Type: "float", Fn: "rand", Argument: 10, Count: 0},
},
},
Timestamp: &statement.Timestamp{
Count: 100000,
Duration: time.Duration(10 * time.Second),
},
},
},
{
s: "INSERT mockCpu\ncpu,host=[us-west|us-east|eu-north],server_id=[str rand(7) 1000]\nbusy=[int rand(1000) 100],free=[float rand(10) 0]\n100000 10s",
stmt: &statement.InsertStatement{
Name: "mockCpu",
TemplateString: "cpu,host=%v,server_id=%v busy=%v,free=%v %v",
TagCount: 2,
Templates: []*statement.Template{
&statement.Template{
Tags: []string{"us-west", "us-east", "eu-north"},
},
&statement.Template{
Function: &statement.Function{Type: "str", Fn: "rand", Argument: 7, Count: 1000},
},
&statement.Template{
Function: &statement.Function{Type: "int", Fn: "rand", Argument: 1000, Count: 100},
},
&statement.Template{
Function: &statement.Function{Type: "float", Fn: "rand", Argument: 10, Count: 0},
},
},
Timestamp: &statement.Timestamp{
Count: 100000,
Duration: time.Duration(10 * time.Second),
},
},
},
{
s: "INSERT mockCpu\n[str rand(1000) 10],\nhost=[us-west|us-east|eu-north],server_id=[str rand(7) 1000],other=x\nbusy=[int rand(1000) 100],free=[float rand(10) 0]\n100000 10s",
stmt: &statement.InsertStatement{
Name: "mockCpu",
TemplateString: "%v,host=%v,server_id=%v,other=x busy=%v,free=%v %v",
TagCount: 3,
Templates: []*statement.Template{
&statement.Template{
Function: &statement.Function{Type: "str", Fn: "rand", Argument: 1000, Count: 10},
},
&statement.Template{
Tags: []string{"us-west", "us-east", "eu-north"},
},
&statement.Template{
Function: &statement.Function{Type: "str", Fn: "rand", Argument: 7, Count: 1000},
},
&statement.Template{
Function: &statement.Function{Type: "int", Fn: "rand", Argument: 1000, Count: 100},
},
&statement.Template{
Function: &statement.Function{Type: "float", Fn: "rand", Argument: 10, Count: 0},
},
},
Timestamp: &statement.Timestamp{
Count: 100000,
Duration: time.Duration(10 * time.Second),
},
},
},
{
skip: true, // Expected error not working
s: "INSERT\ncpu,\nhost=[us-west|us-east|eu-north],server_id=[str rand(7) 1000]\nbusy=[int rand(1000) 100],free=[float rand(10) 0]\n100000 10s",
err: `found ",", expected WS`,
},
// EXEC
{
s: `EXEC other_script`,
stmt: &statement.ExecStatement{Script: "other_script"},
},
{
skip: true, // Implement
s: `EXEC other_script.sh`,
stmt: &statement.ExecStatement{Script: "other_script.sh"},
},
{
skip: true, // Implement
s: `EXEC ../other_script.sh`,
stmt: &statement.ExecStatement{Script: "../other_script.sh"},
},
{
skip: true, // Implement
s: `EXEC /path/to/some/other_script.sh`,
stmt: &statement.ExecStatement{Script: "/path/to/some/other_script.sh"},
},
// GO
{
skip: true,
s: "GO INSERT mockCpu\ncpu,\nhost=[us-west|us-east|eu-north],server_id=[str rand(7) 1000]\nbusy=[int rand(1000) 100],free=[float rand(10) 0]\n100000 10s",
stmt: &statement.GoStatement{
Statement: &statement.InsertStatement{
Name: "mockCpu",
TemplateString: "cpu,host=%v,server_id=%v busy=%v,free=%v %v",
Templates: []*statement.Template{
&statement.Template{
Tags: []string{"us-west", "us-east", "eu-north"},
},
&statement.Template{
Function: &statement.Function{Type: "str", Fn: "rand", Argument: 7, Count: 1000},
},
&statement.Template{
Function: &statement.Function{Type: "int", Fn: "rand", Argument: 1000, Count: 100},
},
&statement.Template{
Function: &statement.Function{Type: "float", Fn: "rand", Argument: 10, Count: 0},
},
},
Timestamp: &statement.Timestamp{
Count: 100000,
Duration: time.Duration(10 * time.Second),
},
},
},
},
{
skip: true,
s: "GO QUERY basicCount\nSELECT count(free) FROM cpu\nDO 100",
stmt: &statement.GoStatement{
Statement: &statement.QueryStatement{Name: "basicCount", TemplateString: "SELECT count(free) FROM cpu", Count: 100},
},
},
{
skip: true,
s: `GO EXEC other_script`,
stmt: &statement.GoStatement{
Statement: &statement.ExecStatement{Script: "other_script"},
},
},
// SET
{
s: `SET database [stress]`,
stmt: &statement.SetStatement{Var: "database", Value: "stress"},
},
// WAIT
{
s: `Wait`,
stmt: &statement.WaitStatement{},
},
}
for _, tst := range tests {
if tst.skip {
continue
}
stmt, err := newParserFromString(tst.s).Parse()
tst.stmt.SetID("x")
if err != nil {
if err.Error() != tst.err {
t.Errorf("unexpected error: %v\nexpected error: %v\n", err, tst.err)
}
} else if tst.err != "" {
t.Errorf("expected error: %v\ngot: nil\n", tst.err)
} else if stmt.SetID("x"); !reflect.DeepEqual(stmt, tst.stmt) {
t.Errorf("expected\n%#v\ngot\n%#v", tst.stmt, stmt)
}
}
}