Connection Options
- -h, --host
The host to connect to
- -u, --user
Username with the necessary privileges
- -p, --password
User password
- --default-connection-database
Set the database name to connect to. Default: INFORMATION_SCHEMA
- -a, --ask-password
Prompt for the user password (see the example after this list)
- -P, --port
TCP/IP port to connect to
- -S, --socket
UNIX domain socket file to use for connection
- --protocol
The protocol to use for connection (tcp, socket)
- -C, --compress-protocol
Use compression on the MySQL connection
- --ssl
Connect using SSL
- --ssl-mode
Desired security state of the connection to the server: DISABLED, PREFERRED, REQUIRED, VERIFY_CA, VERIFY_IDENTITY
- --key
The path name to the key file
- --cert
The path name to the certificate file
- --ca
The path name to the certificate authority file
- --capath
The path name to a directory that contains trusted SSL CA certificates in PEM format
- --cipher
A list of permissible ciphers to use for SSL encryption
- --tls-version
Which protocols the server permits for encrypted connections
- --enable-cleartext-plugin
Enable the cleartext authentication plugin, which is disabled by default
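For illustration, several of the connection options above can be combined as follows; the host, user and TLS settings are placeholders rather than recommendations:

```bash
# Connect over TCP with TLS required; prompt for the password
# interactively instead of passing it on the command line.
mydumper --host=db.example.com --port=3306 \
         --user=backup_user --ask-password \
         --ssl-mode=REQUIRED --compress-protocol
```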
Filter Options
- -x, --regex
Regular expression for 'db.table' matching (see the sketch below)
- -B, --database
Comma delimited list of databases to dump
- -i, --ignore-engines
Comma delimited list of storage engines to ignore
- --where
Dump only selected records.
- -U, --updated-since
Use Update_time to dump only tables updated in the last U days
- --partition-regex
Regex to filter by partition name.
- -O, --omit-from-file
File containing a list of database.table entries to skip, one per line (entries are skipped before the regex option is applied)
- -T, --tables-list
Comma delimited list of tables to dump (does not exclude the regex option). Table names must include the database name. For instance: test.t1,test.t2
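As a sketch, the filters compose; the database and table names below are placeholders:

```bash
# Dump only two specific, database-qualified tables.
mydumper --tables-list=sakila.actor,sakila.film

# Dump everything in the sakila database except tables whose name
# starts with tmp_ (the regex is PCRE-style).
mydumper --regex='^sakila\.(?!tmp_)'
```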
Lock Options
- -z, --tidb-snapshot
Snapshot to use for TiDB
- -k, --no-locks
This option is deprecated; use --sync-thread-lock-mode instead
- --lock-all-tables
This option is deprecated; use --sync-thread-lock-mode instead
- --sync-thread-lock-mode
There are 3 modes that can be used to sync: FTWRL, LOCK_ALL and GTID. If you don't need a consistent backup, use NO_LOCK (see the example below). More info: https://mydumper.github.io/mydumper/docs/html/locks.html. Default: AUTO, which uses the best option depending on the database vendor
- --use-savepoints
Use savepoints to reduce metadata locking issues; needs the SUPER privilege
- --no-backup-locks
Do not use Percona backup locks
- --less-locking
This option is deprecated and its behaviour is now the default, which is useful if you don't have transactional tables. Use --trx-tables otherwise
- --trx-consistency-only
This option is deprecated; use --trx-tables instead
- --trx-tables
Changes the backup process on the assumption that only transactional tables are being exported
- --skip-ddl-locks
Do not send DDL locks when possible
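A minimal sketch of the two common cases, assuming a transactional (e.g. InnoDB-only) workload:

```bash
# Consistent backup of transactional tables with GTID-based syncing.
mydumper --sync-thread-lock-mode=GTID --trx-tables

# Fastest run when consistency is not required.
mydumper --sync-thread-lock-mode=NO_LOCK
```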
PMM Options
- --pmm-path
Path for the PMM textfile collector. Default: /usr/local/percona/pmm2/collectors/textfile-collector/high-resolution
- --pmm-resolution
Resolution of the PMM metrics. Default: high
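For example, spelling out the documented defaults explicitly:

```bash
# Emit dump metrics for PMM's textfile collector; both values shown
# are the defaults listed above.
mydumper --pmm-path=/usr/local/percona/pmm2/collectors/textfile-collector/high-resolution \
         --pmm-resolution=high
```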
Exec Options
- --exec-threads
Number of threads to use with --exec
- --exec
Command to execute using the file as a parameter
- --exec-per-thread
Set a command that will receive the dump via STDIN and whose STDOUT will be written to the output file (illustrated below)
- --exec-per-thread-extension
Set the extension for the STDOUT file when --exec-per-thread is used
- --long-query-retries
Retry checking for long queries, default 0 (do not retry)
- --long-query-retry-interval
Time to wait before retrying the long query check in seconds, default 60
- -l, --long-query-guard
Set long query timer in seconds, default 60
- -K, --kill-long-queries
Kill long running queries (instead of aborting)
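As an illustration, each dump file can be piped through an external compressor; the gzip path is a placeholder:

```bash
# Each dump thread streams its output into gzip; gzip's STDOUT becomes
# the file on disk, saved with the .gz extension.
mydumper --exec-per-thread="/usr/bin/gzip -c" \
         --exec-per-thread-extension=.gz

# Kill, rather than abort on, queries running longer than 120 seconds.
mydumper --long-query-guard=120 --kill-long-queries
```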
Job Options
- --max-time-per-select
Maximum number of seconds that a SELECT should take. Default: 2
- --max-threads-per-table
Maximum number of threads per table to use
- --use-single-column
Even if the table has multiple columns, use only the first column to split the table
- -r, --rows
Split tables into chunks of this many rows. It can be MIN:START_AT:MAX, where MAX can be 0 to mean no limit (see the example below). The chunk size is doubled if a query takes less than 1 second and halved if it takes more than 2 seconds
- --rows-hard
Sets the MIN and MAX limits that apply even if --rows is 0
- --split-partitions
Dump partitions into separate files. This option overrides the --rows option for partitioned tables.
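A sketch of adaptive chunking with explicit bounds; the numbers are arbitrary:

```bash
# Start at 100000 rows per chunk, never adapting below 1000 or above
# 1000000, with at most 4 threads working on any one table.
mydumper --rows=1000:100000:1000000 --max-threads-per-table=4
```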
Checksum Options
- -M, --checksum-all
Dump checksums for all elements
- --data-checksums
Dump table checksums with the data
- --schema-checksums
Dump schema table and view creation checksums
- --routine-checksums
Dump triggers, functions and routines checksums
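For instance, to checksum everything in one pass, or pick individual classes:

```bash
# Checksums for all elements, verifiable after restore.
mydumper --checksum-all

# Or select individual checksum classes instead.
mydumper --data-checksums --schema-checksums --routine-checksums
```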
Objects Options
- -m, --no-schemas
Do not dump table schemas with the data and triggers
- -Y, --all-tablespaces
Dump all the tablespaces.
- -d, --no-data
Do not dump table data
- -G, --triggers
Dump triggers. By default, triggers are not dumped
- -E, --events
Dump events. By default, events are not dumped
- -R, --routines
Dump stored procedures and functions. By default, stored procedures and functions are not dumped
- --skip-constraints
Remove the constraints from the CREATE TABLE statement. By default, the statement is not modified
- --skip-indexes
Remove the indexes from the CREATE TABLE statement. By default, the statement is not modified
- --views-as-tables
Export VIEWs as if they were tables
- -W, --no-views
Do not dump VIEWs
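Two common combinations as a sketch:

```bash
# Schema-only dump that also captures triggers, events and routines.
mydumper --no-data --triggers --events --routines

# Data-only dump: no schemas, no views.
mydumper --no-schemas --no-views
```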
Statement Options
- --load-data
Instead of creating INSERT INTO statements, it creates LOAD DATA statements and .dat files. This option will be deprecated in future releases; use --format
- --csv
Automatically enables --load-data and sets variables to export in CSV format. This option will be deprecated in future releases; use --format
- --format
Set the output format, which can be INSERT, LOAD_DATA, CSV or CLICKHOUSE (see the sketch after this list). Default: INSERT
- --include-header
When --load-data or --csv is used, include a header with the column names
- --fields-terminated-by
Defines the character that is written between fields
- --fields-enclosed-by
Defines the character to enclose fields. Default: "
- --fields-escaped-by
Single character that is going to be used to escape characters in the LOAD DATA statement, default: ''
- --lines-starting-by
Adds the string at the beginning of each row. When --load-data is used it is added to the LOAD DATA statement. It also affects INSERT INTO statements when it is used
- --lines-terminated-by
Adds the string at the end of each row. When --load-data is used it is added to the LOAD DATA statement. It also affects INSERT INTO statements when it is used
- --statement-terminated-by
This might never be used, unless you know what you are doing
- -N, --insert-ignore
Dump rows with INSERT IGNORE
- --replace
Dump rows with REPLACE
- --complete-insert
Use complete INSERT statements that include column names
- --hex-blob
Dump binary columns using hexadecimal notation
- --skip-definer
Removes DEFINER from the CREATE statement. By default, statements are not modified
- -s, --statement-size
Attempted size of INSERT statement in bytes, default 1000000
- --tz-utc
Add SET TIME_ZONE='+00:00' at the top of the dump to allow dumping TIMESTAMP data when a server has data in different time zones or data is being moved between servers with different time zones. Defaults to on; use --skip-tz-utc to disable
- --skip-tz-utc
Does not add SET TIME_ZONE to the backup files
- --set-names
Accepts a list of up to 2 charsets and executes 'SET NAMES' with the proper charset from the list, where the first item is used when executing SHOW CREATE TABLE and the second item for the rest. Use it at your own risk as it might cause inconsistencies #1974. Default: auto,binary. auto means the table character set is used
- --default-character-set
Accepts a list of up to 2 charsets and adds 'SET NAMES' with the proper charset from the list, where the first item is used for the schema files and the second item for the data files. Use it at your own risk as it might cause inconsistencies #1974. Default: binary,binary
- --table-engine-for-view-dependency
Table engine to be used for the CREATE TABLE statement for temporary tables when using views
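A sketch of CSV-style output with explicit delimiters; the delimiter values are illustrative, and --include-header is assumed to apply here because --format=CSV covers the --csv behaviour:

```bash
# CSV export with a header row and comma-separated, double-quoted fields.
mydumper --format=CSV --include-header \
         --fields-terminated-by=',' \
         --fields-enclosed-by='"'

# Classic INSERT output with column names and hex-encoded binary columns.
mydumper --format=INSERT --complete-insert --hex-blob
```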
Extra Options
- -F, --chunk-filesize
Split data files into pieces of this size in MB. Useful for myloader multi-threading (see the example after this list)
- --exit-if-broken-table-found
Exits if a broken table has been found
- --success-on-1146
This option is deprecated; use --ignore-errors instead
- -e, --build-empty-files
Build dump files even if no data is available from the table
- --no-check-generated-fields
Queries related to generated fields are not going to be executed. This will lead to restoration issues if you have generated columns
- --order-by-primary
Sort the data by primary key, or by a unique key if no primary key exists
- --compact
Give less verbose output. Disables header/footer constructs.
- -c, --compress
Compress output files. Options: gzip and zstd. Default: gzip. In future releases the default will be zstd
- --use-defer
Defer integer sharding until all non-integer PK tables have been processed (saves RSS for huge quantities of tables)
- --check-row-count
Run SELECT COUNT(*) and fail mydumper if the dumped row count is different
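For instance, sizing and compressing the output with a parallel restore in mind:

```bash
# Split data files at roughly 512MB and compress them with zstd so a
# later myloader run can spread the work across threads.
mydumper --chunk-filesize=512 --compress=zstd --order-by-primary
```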
Daemon Options
- -D, --daemon
Enable daemon mode (see the example below)
- -I, --snapshot-interval
Interval between each dump snapshot (in minutes), requires --daemon, default 60
- -X, --snapshot-count
Number of snapshots, default 2
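A sketch of a recurring-snapshot setup; the log path is a placeholder:

```bash
# Take a fresh snapshot every 30 minutes, keep the last 4, and log to
# a file instead of stdout.
mydumper --daemon --snapshot-interval=30 --snapshot-count=4 \
         --logfile=/var/log/mydumper.log
```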
Application Options
- -?, --help
Show help options
- -o, --outputdir
Directory to output files to
- --clear
Clear output directory before dumping
- --dirty
Overwrite output directory without clearing (beware of leftover chunks)
- --merge
Merge the metadata with the previous backup and overwrite the output directory without clearing (beware of leftover chunks)
- --stream
Stream the files over STDOUT once they have been written. Since v0.12.7-1, accepts NO_DELETE, NO_STREAM_AND_NO_DELETE and TRADITIONAL, which is the default value and is used if no parameter is given; NO_STREAM is also accepted since v0.16.3-1
- -L, --logfile
Log file name to use; by default stdout is used
- --disk-limits
Set the limits to pause and resume when it determines there is not enough disk space. Accepts values like '<resume>:<pause>' in MB. For instance, 100:500 will pause when there is only 100MB free and will resume once 500MB are available
- --masquerade-filename
Masquerades the filenames
- --ftwrl-max-wait-time
Sets the maximum time to wait before killing the FLUSH TABLES related commands. Default: 60
- --ftwrl-timeout-retries
Sets the number of retries before giving up on acquiring FLUSH TABLES. Default: 0, never give up
- --replica-data
Includes the replica information
- --source-data
Includes the options in the metadata file, to allow myloader to establish replication
- -t, --threads
Number of threads to use; 0 means use the number of CPUs. Default: 4, Minimum: 2
- -V, --version
Show the program version and exit
- -v, --verbose
Verbosity of output, 0 = silent, 1 = errors, 2 = warnings, 3 = info, default 2
- --debug
Turn on debugging output (automatically sets verbosity to 3)
- --ignore-errors
Do not increment the error count, and log a Warning instead of a Critical, for any error number in the comma-separated list
- --defaults-file
Use a specific defaults file (see the example at the end of this list). Default: /etc/mydumper.cnf
- --defaults-extra-file
Use an additional defaults file. This is loaded after --defaults-file, replacing previously defined values
- --source-control-command
Instructs which commands to execute depending on where replication is being configured. Options: TRADITIONAL, AWS
- --optimize-keys-engines
List of engines that will be used to split the create table statement into multiple stages if possible. Default: InnoDB,ROCKSDB
- --server-version
Set the server version to avoid automatic detection
- --throttle
Expects a string like Threads_running=10. It checks SHOW GLOBAL STATUS and, if the value is higher, increases the sleep time between SELECTs. If the option is used without a parameter, it uses Threads_running and the number of threads
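Putting it together, a full invocation might look like the following; the paths and values are placeholders:

```bash
# Read defaults from a custom file, write into a dated directory that
# is cleared first, and use 8 threads with informational verbosity.
mydumper --defaults-file=/etc/mydumper-prod.cnf \
         --outputdir=/backups/$(date +%F) \
         --threads=8 --verbose=3 --clear
```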