- create test database
- create device_channel_feeds table
- insert a sessions feed from a device on brck-cloud
- connect Kafka to the database and import the feed into a topic
- write a Faust application that processes data from the topic
- extract subscriber_id, session_id, connected_at, disconnected_date, connected_time, cached_content_rx_bytes, tx_bytes, rx_bytes
- extract the individual subscriber_id, session_id, site, visits: each visited site gets its own entry, i.e. flatten the array of site visits embedded in each session entry
- Write above extracted data to the test database in different tables:
- - device_sessions
- - site_visits
- - The extraction should use the parsers you wrote.
- Insert a new feed while the pipeline is running and take log screenshots proving the feed is processed in real time by the pipeline.
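The two extraction steps above can be sketched as plain parser functions that a Faust agent would call on each session record. The field names and the payload shape below are assumptions based on the task list; the real brck-cloud feed schema may differ.

```python
# Hypothetical session payload; field names come from the task list,
# values are made up for illustration.
SAMPLE_SESSION = {
    "subscriber_id": "sub-001",
    "session_id": "sess-9",
    "connected_at": "2020-05-01T08:00:00Z",
    "disconnected_date": "2020-05-01T09:30:00Z",
    "connected_time": 5400,
    "cached_content_rx_bytes": 1024,
    "tx_bytes": 2048,
    "rx_bytes": 4096,
    "site_visits": [
        {"site": "example.com", "visits": 3},
        {"site": "example.org", "visits": 1},
    ],
}

SESSION_FIELDS = (
    "subscriber_id", "session_id", "connected_at", "disconnected_date",
    "connected_time", "cached_content_rx_bytes", "tx_bytes", "rx_bytes",
)


def parse_device_session(payload: dict) -> dict:
    """Pull the per-session columns destined for the device_sessions table."""
    return {field: payload.get(field) for field in SESSION_FIELDS}


def parse_site_visits(payload: dict) -> list:
    """Flatten the embedded site-visit array: one row per visited site."""
    return [
        {
            "subscriber_id": payload.get("subscriber_id"),
            "session_id": payload.get("session_id"),
            "site": visit.get("site"),
            "visits": visit.get("visits"),
        }
        for visit in payload.get("site_visits", [])
    ]
```

Keeping the parsers pure (dict in, rows out) makes them easy to unit-test outside the stream processor.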
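The two target tables could look like the sketch below, shown against an in-memory SQLite database as a stand-in for the test database (the real column types depend on the feed schema, so treat this DDL as an assumption).

```python
import sqlite3

# In-memory stand-in for the test database; swap the connection
# string for the real test database when wiring up the pipeline.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE device_sessions (
    subscriber_id TEXT, session_id TEXT, connected_at TEXT,
    disconnected_date TEXT, connected_time INTEGER,
    cached_content_rx_bytes INTEGER, tx_bytes INTEGER, rx_bytes INTEGER
);
CREATE TABLE site_visits (
    subscriber_id TEXT, session_id TEXT, site TEXT, visits INTEGER
);
""")

# Illustrative rows shaped like the parser output.
session_row = ("sub-001", "sess-9", "2020-05-01T08:00:00Z",
               "2020-05-01T09:30:00Z", 5400, 1024, 2048, 4096)
conn.execute(
    "INSERT INTO device_sessions VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
    session_row)

visit_rows = [("sub-001", "sess-9", "example.com", 3),
              ("sub-001", "sess-9", "example.org", 1)]
conn.executemany("INSERT INTO site_visits VALUES (?, ?, ?, ?)", visit_rows)
conn.commit()
```

One session row fans out into several site_visits rows, which is why the two tables are kept separate.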
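For the "connect Kafka to the database" step, one common option is a Kafka Connect JDBC source connector registered over the Connect REST API. The connector name, connection URL, credentials, and incrementing column below are placeholders, not values from the source.

```shell
# Hypothetical registration against a local Kafka Connect worker (port 8083).
# Streams new rows of device_channel_feeds into the topic feeds-device_channel_feeds.
curl -X POST http://localhost:8083/connectors \
  -H "Content-Type: application/json" \
  -d '{
    "name": "device-channel-feeds-source",
    "config": {
      "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
      "connection.url": "jdbc:postgresql://localhost:5432/testdb",
      "connection.user": "test",
      "connection.password": "test",
      "table.whitelist": "device_channel_feeds",
      "mode": "incrementing",
      "incrementing.column.name": "id",
      "topic.prefix": "feeds-"
    }
  }'
```

`mode=incrementing` makes the connector poll for rows with an id greater than the last one it saw, which is what lets a newly inserted feed show up in the topic while the pipeline runs.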