Mirror of https://github.com/Oxalide/vsphere-influxdb-go.git (synced 2023-10-10 11:36:51 +00:00)

commit a59409f16b
parent 704f4d20d1
Author: Adrian Todorov
Date: 2017-10-25 20:52:40 +00:00

add vendoring with go dep

1627 changed files with 489673 additions and 0 deletions


@@ -0,0 +1,214 @@
# Import/Export
## Exporting from 0.8.9
Version `0.8.9` of InfluxDB adds support to export your data to a format that can be imported into `0.9.3` and later.
Note that `0.8.9` can be found here:
```
http://get.influxdb.org.s3.amazonaws.com/influxdb_0.8.9_amd64.deb
http://get.influxdb.org.s3.amazonaws.com/influxdb-0.8.9-1.x86_64.rpm
```
### Design
`0.8.9` exports raw data to a flat file that includes two sections, `DDL` and `DML`. You can choose to export them independently (see below).
The `DDL` section contains the SQL commands to create databases and retention policies. The `DML` section is [line protocol](https://github.com/influxdata/influxdb/blob/master/tsdb/README.md) and can be posted directly to the [http endpoint](https://docs.influxdata.com/influxdb/v0.10/guides/writing_data) in `0.10`. Remember that batching is important; we don't recommend batch sizes over 5k without further testing.
Example export file:
```
# DDL
CREATE DATABASE db0
CREATE DATABASE db1
CREATE RETENTION POLICY rp1 ON db1 DURATION 1h REPLICATION 1
# DML
# CONTEXT-DATABASE:db0
# CONTEXT-RETENTION-POLICY:autogen
cpu,host=server1 value=33.3 1464026335000000000
cpu,host=server1 value=43.3 1464026395000000000
cpu,host=server1 value=63.3 1464026575000000000
# CONTEXT-DATABASE:db1
# CONTEXT-RETENTION-POLICY:rp1
cpu,host=server1 value=73.3 1464026335000000000
cpu,host=server1 value=83.3 1464026395000000000
cpu,host=server1 value=93.3 1464026575000000000
```
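Because the `DML` is plain line protocol, a batch like the one above can be replayed against a `0.9.3`+ instance without any extra tooling. Here is a minimal Go sketch, assuming a local instance on the default port with no authentication, using the `db0`/`autogen` context from the example:
```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// One small batch of points from the example export above.
	batch := strings.Join([]string{
		"cpu,host=server1 value=33.3 1464026335000000000",
		"cpu,host=server1 value=43.3 1464026395000000000",
		"cpu,host=server1 value=63.3 1464026575000000000",
	}, "\n")

	// The /write endpoint accepts raw line protocol; the db and rp query
	// params play the role of the CONTEXT-DATABASE and
	// CONTEXT-RETENTION-POLICY comments in the export file.
	resp, err := http.Post(
		"http://localhost:8086/write?db=db0&rp=autogen",
		"text/plain",
		strings.NewReader(batch),
	)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status) // 204 No Content on success
}
```
A real replay would read the export file and honor the `# CONTEXT-*` comments, which is exactly what the importer source below does.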
You need to specify a database and shard space when you export.
To list out your shards, use the following http endpoint:
`/cluster/shard_spaces`
example:
```sh
http://username:password@localhost:8086/cluster/shard_spaces
```
Then, to export a database with the name "metrics" and a shard space with the name "default", issue the following curl command:
```sh
curl -o export http://username:password@localhost:8086/export/metrics/default
```
Compression is supported, and will result in a significantly smaller file size.
Use the following command for compression:
```sh
curl -o export.gz --compressed http://username:password@localhost:8086/export/metrics/default
```
You can also export just the `DDL` with this option:
```sh
curl -o export.ddl http://username:password@localhost:8086/export/metrics/default?l=ddl
```
Or just the `DML` with this option:
```sh
curl -o export.dml.gz --compressed http://username:password@localhost:8086/export/metrics/default?l=dml
```
### Assumptions
- Series name mapping follows these [guidelines](https://docs.influxdata.com/influxdb/v0.8/advanced_topics/schema_design/)
- Database name will map directly from `0.8` to `0.10`
- Shard Spaces map to Retention Policies
- Shard Space Duration is ignored, as in `0.10` we determine shard size automatically
- Regex is used to match the correct series names and only exports that data for the database
- Duration becomes the new Retention Policy duration
- Users are not migrated due to inability to get passwords. Anyone using users will need to manually set these back up in `0.10`
### Upgrade Recommendations
It's recommended that you upgrade to `0.9.3` or later first and have all your writes going there. Then, on the `0.8.X` instances, upgrade to `0.8.9`.
When exporting, it is important to change your config so the http endpoints do not time out. To do so, make this change in your config:
```toml
# Configure the http api
[api]
read-timeout = "0s"
```
### Exceptions
If a series can't be exported to tags based on the guidelines mentioned above,
we will insert the entire series name as the measurement name. You can either
allow that to import into the new InfluxDB instance, or you can massage the
data yourself prior to importing it; a sketch of one approach follows the
example below.
For example, if you have the following series name:
```
metric.disk.c.host.server01.single
```
It will export exactly that as the measurement name, with no tags:
```
metric.disk.c.host.server01.single
```
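If you would rather massage such a name into tags yourself, one hedged approach is to treat the leading segments as `tagName.tagValue` pairs (the format described under Caveats below) and the last segment as the measurement. The `retag` helper here is an illustration of that rule, not part of the exporter:
```go
package main

import (
	"fmt"
	"strings"
)

// retag rebuilds a dotted 0.8 series name as a 0.9 measurement plus tags.
// Leading segments are read as tagName.tagValue pairs; the last segment
// becomes the measurement. Names that don't fit the pair form (such as the
// even-segment example above) are returned unchanged.
func retag(series string) string {
	parts := strings.Split(series, ".")
	if len(parts) < 3 || len(parts)%2 == 0 {
		return series
	}
	measurement := parts[len(parts)-1]
	tags := make([]string, 0, (len(parts)-1)/2)
	for i := 0; i+1 < len(parts)-1; i += 2 {
		tags = append(tags, parts[i]+"="+parts[i+1])
	}
	return measurement + "," + strings.Join(tags, ",")
}

func main() {
	fmt.Println(retag("metric.disk.c.host.server01.single")) // unchanged
	fmt.Println(retag("az.us-west-1.host.serverA.cpu"))       // cpu,az=us-west-1,host=serverA
}
```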
### Export Metrics
When you export, you will now get comments inline in the `DML`:
`# Found 999 Series for export`
As well as count totals for each series exported:
`# Series FOO - Points Exported: 999`
With a total at the bottom:
`# Points Exported: 999`
You can grep the exported file to get all the export metrics:
`cat myexport | grep Exported`
## Importing
Version `0.9.3` of InfluxDB adds support to import your data from version `0.8.9`.
## Caveats
For the export/import to work, all prerequisites have to be met. For export, all series names in `0.8` should be in the following format:
```
<tagName>.<tagValue>.<tagName>.<tagValue>.<measurement>
```
For example:
```
az.us-west-1.host.serverA.cpu
```
or with any number of tags:
```
building.2.temperature
```
Additionally, the fields need to have a consistent type (all float64, all int64, etc.) for every write in `0.8`. Otherwise those writes have the potential to fail during the import.
See below for more information.
## Running the import command
To import via the CLI, run the following command:
```sh
influx -import -path=metrics-default.gz -compressed
```
If the file is not compressed, you can run it without the `-compressed` flag:
```sh
influx -import -path=metrics-default
```
To redirect failed import lines to another file, run this command:
```sh
influx -import -path=metrics-default.gz -compressed > failures
```
The import sends the line protocol to the server in batches of 5,000 lines.
### Throttling the import
If you need to throttle the import so the database has time to ingest, you can use the `-pps` flag. This will limit the points per second that will be sent to the server.
```sh
influx -import -path=metrics-default.gz -compressed -pps 50000 > failures
```
This says that you don't want more than 50,000 points per second written to the database. Due to the processing overhead, however, you will likely never hit exactly 50,000 pps; something closer to 35,000 pps is typical. (The `batchWrite` function in the importer source below shows the mechanism.)
## Understanding the results of the import
During the import, a status message is written out for every 100,000 points imported, reporting stats on the progress of the import:
```
2015/08/21 14:48:01 Processed 3100000 lines. Time elapsed: 56.740578415s. Points per second (PPS): 54634
```
The importer will give some basic stats when finished:
```sh
2015/07/29 23:15:20 Processed 2 commands
2015/07/29 23:15:20 Processed 70207923 inserts
2015/07/29 23:15:20 Failed 29785000 inserts
```
Most inserts fail due to the following types of error:
```sh
2015/07/29 22:18:28 error writing batch: write failed: field type conflict: input field "value" on measurement "metric" is type float64, already exists as type integer
```
This is because in `0.8` a field could be created and saved as an int for one write and a float for another. In `0.9` and greater, a field must have a consistent type.
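If you hit this error, one option is to pre-process the `DML` so every numeric field is a float before importing. Below is a hedged sketch that assumes the conflicting integers appear in the `0.9` line-protocol form (`value=33i`); verify it against your actual export before relying on it:
```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// intField matches an integer field such as value=33i in 0.9 line protocol;
// rewriting it to value=33 makes the server store the field as a float64.
var intField = regexp.MustCompile(`(=-?\d+)i\b`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := sc.Text()
		// Leave the DDL section and # CONTEXT comments untouched.
		if len(line) > 0 && line[0] != '#' {
			line = intField.ReplaceAllString(line, "$1")
		}
		fmt.Println(line)
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```
Run it as a filter over the exported file and import the rewritten output.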


@@ -0,0 +1,252 @@
// Package v8 contains code for importing data from 0.8 instances of InfluxDB.
package v8 // import "github.com/influxdata/influxdb/importer/v8"

import (
	"bufio"
	"compress/gzip"
	"fmt"
	"io"
	"log"
	"os"
	"strings"
	"time"

	"github.com/influxdata/influxdb/client"
)

const batchSize = 5000

// Config is the config used to initialize an Importer.
type Config struct {
	Path       string // Path to import data.
	Version    string
	Compressed bool // Whether import data is gzipped.
	PPS        int  // Points per second the importer will write with.
	client.Config
}

// NewConfig returns an initialized *Config
func NewConfig() Config {
	return Config{Config: client.NewConfig()}
}

// Importer is the importer used for importing 0.8 data
type Importer struct {
	client                *client.Client
	database              string
	retentionPolicy       string
	config                Config
	batch                 []string
	totalInserts          int
	failedInserts         int
	totalCommands         int
	throttlePointsWritten int
	lastWrite             time.Time
	throttle              *time.Ticker
}

// NewImporter will return an initialized Importer struct
func NewImporter(config Config) *Importer {
	config.UserAgent = fmt.Sprintf("influxDB importer/%s", config.Version)
	return &Importer{
		config: config,
		batch:  make([]string, 0, batchSize),
	}
}

// Import processes the specified file in the Config and writes the data to the databases in chunks specified by batchSize
func (i *Importer) Import() error {
	// Create a client and try to connect.
	cl, err := client.NewClient(i.config.Config)
	if err != nil {
		return fmt.Errorf("could not create client %s", err)
	}
	i.client = cl
	if _, _, e := i.client.Ping(); e != nil {
		return fmt.Errorf("failed to connect to %s", i.client.Addr())
	}

	// Validate args
	if i.config.Path == "" {
		return fmt.Errorf("file argument required")
	}

	defer func() {
		if i.totalInserts > 0 {
			log.Printf("Processed %d commands\n", i.totalCommands)
			log.Printf("Processed %d inserts\n", i.totalInserts)
			log.Printf("Failed %d inserts\n", i.failedInserts)
		}
	}()

	// Open the file
	f, err := os.Open(i.config.Path)
	if err != nil {
		return err
	}
	defer f.Close()

	var r io.Reader

	// If gzipped, wrap in a gzip reader
	if i.config.Compressed {
		gr, err := gzip.NewReader(f)
		if err != nil {
			return err
		}
		defer gr.Close()
		// Set the reader to the gzip reader
		r = gr
	} else {
		// Standard text file so our reader can just be the file
		r = f
	}

	// Get our reader
	scanner := bufio.NewScanner(r)

	// Process the DDL
	i.processDDL(scanner)

	// Set up our throttle channel. Since there is effectively no other activity at this point
	// the smaller resolution gets us much closer to the requested PPS
	i.throttle = time.NewTicker(time.Microsecond)
	defer i.throttle.Stop()

	// Prime the last write
	i.lastWrite = time.Now()

	// Process the DML
	i.processDML(scanner)

	// Check if we had any errors scanning the file
	if err := scanner.Err(); err != nil {
		return fmt.Errorf("reading file: %s", err)
	}

	// If there were any failed inserts then return an error so that a non-zero
	// exit code can be returned.
	if i.failedInserts > 0 {
		plural := " was"
		if i.failedInserts > 1 {
			plural = "s were"
		}
		return fmt.Errorf("%d point%s not inserted", i.failedInserts, plural)
	}

	return nil
}

func (i *Importer) processDDL(scanner *bufio.Scanner) {
	for scanner.Scan() {
		line := scanner.Text()
		// If we find the DML token, we are done with DDL
		if strings.HasPrefix(line, "# DML") {
			return
		}
		if strings.HasPrefix(line, "#") {
			continue
		}
		// Skip blank lines
		if strings.TrimSpace(line) == "" {
			continue
		}
		i.queryExecutor(line)
	}
}

func (i *Importer) processDML(scanner *bufio.Scanner) {
	start := time.Now()
	for scanner.Scan() {
		line := scanner.Text()
		if strings.HasPrefix(line, "# CONTEXT-DATABASE:") {
			i.database = strings.TrimSpace(strings.Split(line, ":")[1])
		}
		if strings.HasPrefix(line, "# CONTEXT-RETENTION-POLICY:") {
			i.retentionPolicy = strings.TrimSpace(strings.Split(line, ":")[1])
		}
		if strings.HasPrefix(line, "#") {
			continue
		}
		// Skip blank lines
		if strings.TrimSpace(line) == "" {
			continue
		}
		i.batchAccumulator(line, start)
	}
	// Call batchWrite one last time to flush anything out in the batch
	i.batchWrite()
}

func (i *Importer) execute(command string) {
	response, err := i.client.Query(client.Query{Command: command, Database: i.database})
	if err != nil {
		log.Printf("error: %s\n", err)
		return
	}
	if err := response.Error(); err != nil {
		log.Printf("error: %s\n", response.Error())
	}
}

func (i *Importer) queryExecutor(command string) {
	i.totalCommands++
	i.execute(command)
}

func (i *Importer) batchAccumulator(line string, start time.Time) {
	i.batch = append(i.batch, line)
	if len(i.batch) == batchSize {
		i.batchWrite()
		i.batch = i.batch[:0]
		// Give some status feedback every 100000 lines processed
		processed := i.totalInserts + i.failedInserts
		if processed%100000 == 0 {
			since := time.Since(start)
			pps := float64(processed) / since.Seconds()
			log.Printf("Processed %d lines. Time elapsed: %s. Points per second (PPS): %d", processed, since.String(), int64(pps))
		}
	}
}

func (i *Importer) batchWrite() {
	// Accumulate the batch size to see how many points we have written this second
	i.throttlePointsWritten += len(i.batch)

	// Find out when we last wrote data
	since := time.Since(i.lastWrite)

	// Check to see if we've exceeded our points per second for the current timeframe
	var currentPPS int
	if since.Seconds() > 0 {
		currentPPS = int(float64(i.throttlePointsWritten) / since.Seconds())
	} else {
		currentPPS = i.throttlePointsWritten
	}

	// If our currentPPS is greater than the PPS specified, then we wait and retry
	if currentPPS > i.config.PPS && i.config.PPS != 0 {
		// Wait for the next tick
		<-i.throttle.C

		// Decrement the batch size back out as it is going to get called again
		i.throttlePointsWritten -= len(i.batch)
		i.batchWrite()
		return
	}

	_, e := i.client.WriteLineProtocol(strings.Join(i.batch, "\n"), i.database, i.retentionPolicy, i.config.Precision, i.config.WriteConsistency)
	if e != nil {
		log.Println("error writing batch: ", e)
		// Output failed lines to STDOUT so users can capture lines that failed to import
		fmt.Println(strings.Join(i.batch, "\n"))
		i.failedInserts += len(i.batch)
	} else {
		i.totalInserts += len(i.batch)
	}
	i.throttlePointsWritten = 0
	i.lastWrite = time.Now()
}
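
// A minimal usage sketch (an illustrative assumption, not part of this file):
// wire up Config and call Import, roughly as the influx CLI does.
//
//	cfg := v8.NewConfig()
//	cfg.Path = "metrics-default.gz" // exported file
//	cfg.Compressed = true
//	cfg.PPS = 50000 // optional throttle; 0 means unthrottled
//	// Set the embedded client.Config fields (URL, Username, Password) as needed.
//	if err := v8.NewImporter(cfg).Import(); err != nil {
//		log.Fatal(err)
//	}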