Connection Options

-h, --host

The host to connect to

-u, --user

Username with the necessary privileges

-p, --password

User password

-a, --ask-password

Prompt for the user password

-P, --port

TCP/IP port to connect to

-S, --socket

UNIX domain socket file to use for connection

--protocol

The protocol to use for connection (tcp, socket)

-C, --compress-protocol

Use compression on the MySQL connection

--ssl

Connect using SSL

--ssl-mode

Desired security state of the connection to the server: REQUIRED, VERIFY_IDENTITY

--key

The path name to the key file

--cert

The path name to the certificate file

--ca

The path name to the certificate authority file

--capath

The path name to a directory that contains trusted SSL CA certificates in PEM format

--cipher

A list of permissible ciphers to use for SSL encryption

--tls-version

Which protocols the server permits for encrypted connections
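
For example, a typical TCP connection that prompts for the password might look like this (host, user, and output directory are placeholders):

    mydumper --host=192.168.1.10 --port=3306 --user=backup --ask-password --outputdir=/backups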

Filter Options

-x, --regex

Regular expression for 'db.table' matching
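
For example, to dump everything except the mysql and sys schemas, a negative-lookahead pattern such as the following can be used (pattern shown for illustration):

    mydumper --regex '^(?!(mysql\.|sys\.))' --outputdir=/backups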

-B, --database

Comma-delimited list of databases to dump

-i, --ignore-engines

Comma-delimited list of storage engines to ignore

--where

Dump only selected records.
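
For example (the column name is hypothetical; the clause is applied to every dumped table, so the column must exist in all of them):

    mydumper --database=app --where="id > 1000" --outputdir=/backups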

-U, --updated-since

Use Update_time to dump only tables updated in the last U days

--partition-regex

Regex to filter by partition name.

-O, --omit-from-file

File containing a list of database.table entries to skip, one per line (entries are skipped before the regex option is applied)

-T, --tables-list

Comma-delimited list of tables to dump (the regex option still applies). Table names must include the database name, for instance: test.t1,test.t2

Lock Options

-z, --tidb-snapshot

Snapshot to use for TiDB

-k, --no-locks

Do not execute the temporary shared read lock.

WARNING: This will cause inconsistent backups

--use-savepoints

Use savepoints to reduce metadata locking issues; requires the SUPER privilege

--no-backup-locks

Do not use Percona backup locks

--lock-all-tables

Use LOCK TABLES for all tables instead of FTWRL (FLUSH TABLES WITH READ LOCK)

--less-locking

Minimize locking time on InnoDB tables.

--trx-consistency-only

Transactional consistency only

--skip-ddl-locks

Do not send DDL locks when possible

PMM Options

--pmm-path

The PMM path. Default: /usr/local/percona/pmm2/collectors/textfile-collector/high-resolution

--pmm-resolution

The PMM resolution. Default: high

Exec Options

--exec-threads

Number of threads to use with --exec

--exec

Command to execute using the file as a parameter

--exec-per-thread

Set the command that will receive the dump via STDIN and whose STDOUT will be written to the output file

--exec-per-thread-extension

Set the extension for the STDOUT file when --exec-per-thread is used
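
For example, to pipe each thread's output through gzip and name the resulting files accordingly (paths are illustrative):

    mydumper --exec-per-thread="/usr/bin/gzip -c" --exec-per-thread-extension=".gz" --outputdir=/backups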

--long-query-retries

Retry checking for long queries, default 0 (do not retry)

--long-query-retry-interval

Time to wait before retrying the long query check in seconds, default 60

-l, --long-query-guard

Set long query timer in seconds, default 60

-K, --kill-long-queries

Kill long running queries (instead of aborting)

Job Options

--max-threads-per-table

Maximum number of threads per table to use

--char-deep

Defines the number of characters to use when the primary key is a string

--char-chunk

Defines how many pieces the table should be split into. By default, the number of threads is used

-r, --rows

Split tables into chunks of this many rows. The value can be MIN:START_AT:MAX, where MAX can be 0, meaning there is no limit. The chunk size is doubled if a query takes less than 1 second and halved if it takes more than 2 seconds
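
For example (values are illustrative), the following starts with chunks of 10000 rows and lets the timing feedback described above shrink them to no fewer than 1000 or grow them to no more than 100000:

    mydumper --rows=1000:10000:100000 --outputdir=/backups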

--split-partitions

Dump partitions into separate files. This option overrides the --rows option for partitioned tables.

Checksum Options

-M, --checksum-all

Dump checksums for all elements

--data-checksums

Dump table checksums with the data

--schema-checksums

Dump schema table and view creation checksums

--routine-checksums

Dump checksums for triggers, functions, and routines

Objects Options

-m, --no-schemas

Do not dump table schemas with the data and triggers

-Y, --all-tablespaces

Dump all the tablespaces.

-d, --no-data

Do not dump table data

-G, --triggers

Dump triggers. By default, triggers are not dumped

-E, --events

Dump events. By default, events are not dumped

-R, --routines

Dump stored procedures and functions. By default, neither stored procedures nor functions are dumped

--skip-constraints

Remove the constraints from the CREATE TABLE statement. By default, the statement is not modified

--skip-indexes

Remove the indexes from the CREATE TABLE statement. By default, the statement is not modified

--views-as-tables

Export VIEWs as if they were tables

-W, --no-views

Do not dump VIEWs

Statement Options

--load-data

Instead of creating INSERT INTO statements, it creates LOAD DATA statements and .dat files. This option will be deprecated in future releases; use --format instead

--csv

Automatically enables --load-data and sets variables to export in CSV format. This option will be deprecated in future releases; use --format instead

--format

Set the output format, which can be INSERT, LOAD_DATA, CSV, or CLICKHOUSE. Default: INSERT
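
For example, to produce LOAD DATA files instead of INSERT statements (database name is a placeholder):

    mydumper --format=LOAD_DATA --database=app --outputdir=/backups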

--include-header

When --load-data or --csv is used, a header with the column names is included

--fields-terminated-by

Defines the character that is written between fields

--fields-enclosed-by

Defines the character used to enclose fields. Default: " (double quote)

--fields-escaped-by

Single character used to escape characters in the LOAD DATA statement. Default: '\' (backslash)

--lines-starting-by

Adds the string at the beginning of each row. When --load-data is used, it is added to the LOAD DATA statement. It also affects INSERT INTO statements when used.

--lines-terminated-by

Adds the string at the end of each row. When --load-data is used, it is added to the LOAD DATA statement. It also affects INSERT INTO statements when used.

--statement-terminated-by

This might never be used, unless you know what you are doing

-N, --insert-ignore

Dump rows with INSERT IGNORE

--replace

Dump rows with REPLACE

--complete-insert

Use complete INSERT statements that include column names

--hex-blob

Dump binary columns using hexadecimal notation

--skip-definer

Removes DEFINER from the CREATE statement. By default, statements are not modified

-s, --statement-size

Attempted size of INSERT statement in bytes, default 1000000

--tz-utc

SET TIME_ZONE='+00:00' at the top of the dump to allow dumping of TIMESTAMP data when a server has data in different time zones or data is being moved between servers with different time zones. Defaults to on; use --skip-tz-utc to disable.

--skip-tz-utc

Does not add SET TIME_ZONE to the backup files

--set-names

Sets the names (SET NAMES); use at your own risk. Default: binary

Extra Options

-F, --chunk-filesize

Split data files into pieces of this size in MB. Useful for myloader multi-threading.

--exit-if-broken-table-found

Exits if a broken table has been found

--success-on-1146

Do not increment the error count, and log a warning instead of a critical error when a table does not exist

-e, --build-empty-files

Build dump files even if no data is available from the table

--no-check-generated-fields

Queries related to generated fields are not going to be executed. This will lead to restoration issues if you have generated columns

--order-by-primary

Sort the data by Primary Key or Unique key if no primary key exists

--compact

Give less verbose output. Disables header/footer constructs.

-c, --compress

Compress output files using /usr/bin/gzip or /usr/bin/zstd. Options: GZIP and ZSTD. Default: GZIP

--use-defer

Defer integer sharding until all non-integer PK tables have been processed (saves RSS for huge quantities of tables)

--check-row-count

Run SELECT COUNT(*) and fail mydumper if the dumped row count is different

--source-data

Include the options in the metadata file to allow myloader to establish replication

Daemon Options

-D, --daemon

Enable daemon mode

-I, --snapshot-interval

Interval between each dump snapshot (in minutes); requires --daemon. Default: 60

-X, --snapshot-count

Number of snapshots. Default: 2
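
For example, to keep four rolling snapshots taken every 30 minutes (values are illustrative):

    mydumper --daemon --snapshot-interval=30 --snapshot-count=4 --outputdir=/backups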

Application Options

-?, --help

Show help options

-o, --outputdir

Directory to output files to

--clear

Clear output directory before dumping

--dirty

Overwrite the output directory without clearing it (beware of leftover chunks)

--stream

Stream over STDOUT once the files have been written. Since v0.12.7-1, accepts NO_DELETE, NO_STREAM_AND_NO_DELETE, and TRADITIONAL, which is the default value and is used if no parameter is given
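
For example, a dump can be piped straight into myloader without landing on disk first (a common pattern; the database name and myloader options are placeholders):

    mydumper --database=app --stream | myloader --stream --overwrite-tables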

-L, --logfile

Log file name to use, by default stdout is used

--disk-limits

Set the limits to pause and resume the dump when there is not enough disk space. Accepts values like '<pause>:<resume>' in MB. For instance, 100:500 will pause when there is only 100 MB free and will resume once 500 MB are available

-t, --threads

Number of threads to use; 0 means use the number of CPUs. Default: 4

-V, --version

Show the program version and exit

-v, --verbose

Verbosity of output, 0 = silent, 1 = errors, 2 = warnings, 3 = info, default 2

--debug

Turn on debugging output (automatically sets verbosity to 3)

--defaults-file

Use a specific defaults file. Default: /etc/mydumper.cnf

--defaults-extra-file

Use an additional defaults file. This is loaded after --defaults-file, replacing previously defined values
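
For example, a minimal defaults file might look like the following sketch; values are placeholders, keys use the long option names listed above, and mydumper reads them from a [mydumper] section:

    [mydumper]
    host = 127.0.0.1
    user = backup
    rows = 10000
    outputdir = /backups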

--source-control-command

Instructs the proper commands to execute depending on where replication is being configured. Options: TRADITIONAL, AWS