# Prerequisites
1. [AWS Aurora to Maxwell Kafka Producer](/abacaphiliac/a3e8fe94152fc1d27368c54dc2f431b4)

After running through the prerequisites, you will have:
* an AWS Aurora instance
* a Kafka service named `kafka`, listening on `kafka:9092`

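Before continuing, you can confirm that the `kafka` container from the prerequisite is still running. This is a minimal check and assumes the container was literally named `kafka`, as in the prerequisite:
```
docker ps --filter name=kafka
```
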
# Start Maxwell with Namespaced Topic Kafka Producer
This is a slight variation of the prerequisite, [AWS Aurora to Maxwell Kafka Producer](/abacaphiliac/a3e8fe94152fc1d27368c54dc2f431b4).
In the prerequisite we ran Maxwell with the default Kafka producer configuration, which produces all messages on the `maxwell` topic.
In this example we override the `MAXWELL_OPTIONS` environment variable and specify a dynamic topic name, so that
Maxwell routes messages from each table to a topic of the same name, namespaced by database name (for example, changes to table `bar` in database `foo` land on topic `maxwell_foo_bar`).

```
docker run -it --rm \
--env MYSQL_USERNAME=AURORA_USERNAME \
--env MYSQL_PASSWORD=AURORA_PASSWORD \
--env MYSQL_HOST=AURORA_HOST \
--link kafka \
--env KAFKA_HOST=kafka \
--env KAFKA_PORT=9092 \
--env MAXWELL_OPTIONS="--kafka_topic=maxwell_%{database}_%{table}" \
--name maxwell \
maxwell
```

output:
```
17:44:34,901 INFO ProducerConfig - ProducerConfig values:
request.timeout.ms = 30000
retry.backoff.ms = 100
buffer.memory = 33554432
ssl.truststore.password = null
batch.size = 16384
ssl.keymanager.algorithm = SunX509
receive.buffer.bytes = 32768
ssl.cipher.suites = null
ssl.key.password = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.provider = null
sasl.kerberos.service.name = null
max.in.flight.requests.per.connection = 5
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [kafka:9092]
client.id =
max.request.size = 1048576
acks = 1
linger.ms = 0
sasl.kerberos.kinit.cmd = /usr/bin/kinit
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
metadata.fetch.timeout.ms = 60000
ssl.endpoint.identification.algorithm = null
ssl.keystore.location = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
ssl.truststore.location = null
ssl.keystore.password = null
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
block.on.buffer.full = false
metrics.sample.window.ms = 30000
metadata.max.age.ms = 300000
security.protocol = PLAINTEXT
ssl.protocol = TLS
sasl.kerberos.min.time.before.relogin = 60000
timeout.ms = 30000
connections.max.idle.ms = 540000
ssl.trustmanager.algorithm = PKIX
metric.reporters = []
compression.type = none
ssl.truststore.type = JKS
max.block.ms = 60000
retries = 0
send.buffer.bytes = 131072
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
reconnect.backoff.ms = 50
metrics.num.samples = 2
ssl.keystore.type = JKS

17:44:34,952 INFO AppInfoParser - Kafka version : 0.9.0.1
17:44:34,952 INFO AppInfoParser - Kafka commitId : 23c69d62a0cabf06
17:44:35,012 INFO Maxwell - Maxwell v1.7.0 is booting (MaxwellKafkaProducer), starting at BinlogPosition[mysql-bin-changelog.000002:84337]
17:44:35,680 INFO MysqlSavedSchema - Restoring schema id 1 (last modified at BinlogPosition[mysql-bin-changelog.000002:3521])
17:44:38,991 INFO OpenReplicator - starting replication at mysql-bin-changelog.000002:84337
```

The process is now waiting for new data events.

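Once Maxwell produces its first event for a table, the corresponding namespaced topic should appear on the broker (assuming automatic topic creation is enabled, which is the Kafka default). The sketch below lists topics with the `kafka-topics.sh` tool shipped in the same `spotify/kafka` image; it assumes the linked `kafka` container also runs ZooKeeper on port 2181, as `spotify/kafka` does:
```
docker run -it --rm --link kafka spotify/kafka \
/opt/kafka_2.11-0.10.1.0/bin/kafka-topics.sh \
--zookeeper kafka:2181 \
--list
```
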
# Start a consumer (in another terminal window)
This command starts an unnamed instance of `spotify/kafka` linked to the `kafka` service, runs a console consumer against the namespaced topic, displays any existing messages, and waits for new messages until you quit (which destroys the container). Substitute your own database and table names in the topic:
```
docker run -it --rm --link kafka spotify/kafka \
/opt/kafka_2.11-0.10.1.0/bin/kafka-console-consumer.sh \
--bootstrap-server kafka:9092 \
--topic maxwell_{AURORA_DATABASE}_{AURORA_TABLE} \
--from-beginning
```
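
Because each table gets its own topic, you can also consume every table in a database at once. This is a hedged variation using the older ZooKeeper-based console consumer, which accepts a regular-expression `--whitelist`; it assumes ZooKeeper is reachable on `kafka:2181` inside the `spotify/kafka` container:
```
docker run -it --rm --link kafka spotify/kafka \
/opt/kafka_2.11-0.10.1.0/bin/kafka-console-consumer.sh \
--zookeeper kafka:2181 \
--whitelist 'maxwell_AURORA_DATABASE_.*' \
--from-beginning
```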

Connect to the AWS Aurora instance, insert some records, and update some records. Data events from Maxwell will be printed in the consumer terminal window:
```
{"database":"AURORA_DATABASE","table":"AURORA_TABLE","type":"update","ts":1484606003,"xid":1655558,"commit":true,"data":{"id":4,"first_name":"Tim","last_name":"Younger"},"old":{"first_name":"Timothy"}}
{"database":"AURORA_DATABASE","table":"AURORA_TABLE","type":"update","ts":1484606435,"xid":1658343,"commit":true,"data":{"id":4,"first_name":"Timothy","last_name":"Younger"},"old":{"first_name":"Tim"}}
{"database":"AURORA_DATABASE","table":"AURORA_TABLE","type":"update","ts":1484606451,"xid":1658455,"commit":true,"data":{"id":4,"first_name":"Tim","last_name":"Younger"},"old":{"first_name":"Timothy"}}
```
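
For reference, data events like the ones above would be produced by statements such as the following. This is only a sketch: it assumes a MySQL 5.6-compatible Aurora endpoint, the official `mysql:5.6` client image, and a table with `id`, `first_name`, and `last_name` columns; substitute your own credentials, database, and table names:
```
docker run -it --rm mysql:5.6 \
mysql -hAURORA_HOST -uAURORA_USERNAME -pAURORA_PASSWORD AURORA_DATABASE \
-e "UPDATE AURORA_TABLE SET first_name = 'Tim' WHERE id = 4"
```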