Alternative data (AltData) is data from sources that are considered non-traditional for a particular industry. In other words, what counts as alternative depends on which traditional data sources you and your competitors already use. The point of analyzing AltData is to extract unique insights and actions beyond those provided by regular, traditional data; as a result, your company could gain a strong competitive differentiator, at least for a time. [2: www.import.io]

It’s simple, really: to beat the market, just have insights before everyone else. [3]

Speaking of the financial markets, the Eagle Alpha alternative data report defines 24 categories of AltData. See the picture below:
24 Categories of Alternative Data [1]

But “Having a wealth of data is great, but only if you really believe it is going to improve your ability to forecast and capture market inefficiencies or risk premia. Available information is not synonymous with useful information.”
 — Ray Iwanowski, Managing Principal and Co-founder, Secor Asset Management LP [3]
The QUIZ

Using alternative data such as tweets, web traffic, and Google Trends, can we predict a long or a short signal for the stock price of company X?
STAGE 1

Step 1: Import files into the workspace

We upload the datasets from the local device into the main Google Colab notebook workspace using the io library. Then the Pandas library helps us read the .CSV files and load them into DataFrames.
Import and read files with data
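A minimal sketch of this step, assuming hypothetical file names (google.colab.files only works inside a Colab notebook):

    import io
    import pandas as pd
    from google.colab import files  # Colab-only helper for local uploads

    # Opens a file picker in the notebook; returns {filename: bytes}
    uploaded = files.upload()

    # Read each uploaded .CSV into its own DataFrame
    web_traffic = pd.read_csv(io.BytesIO(uploaded['web_traffic.csv']))
    tweets = pd.read_csv(io.BytesIO(uploaded['tweets.csv']))
    trends = pd.read_csv(io.BytesIO(uploaded['google_trends.csv']))
    stock = pd.read_csv(io.BytesIO(uploaded['stock_prices.csv']))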

Step 2: Engineering each dataset and its features

Take, for example, the web traffic dataset: we drop the columns we don’t need, i.e., the ones that are not informative in this situation. And because in the next step we will merge several of these datasets with the “date” columns serving as the join key, we need to bring the “date” columns to one common data type. There are two reasons: the column contains a time component we don’t need, and, since the datasets were collected from different sources, their “date” columns can come in different formats, so we convert them all to the same one.
Drop the uninformative columns
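A sketch of that cleaning step; the column names dropped here are assumptions, not the actual ones:

    # Drop columns that carry no useful signal (names are hypothetical)
    web_traffic = web_traffic.drop(columns=['session_id', 'referrer'])

    # Parse every 'date' column, discard the time component, and render
    # the dates in one shared format so they can serve as the join key
    for df in (web_traffic, tweets, trends, stock):
        df['date'] = pd.to_datetime(df['date']).dt.strftime('%Y-%m-%d')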

Plot the web traffic numbers in a graph.
Web Traffic Graph
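Plotting is one line with Pandas; 'visits' is an assumed column name:

    import matplotlib.pyplot as plt

    # Quick look at the raw traffic series
    web_traffic.plot(x='date', y='visits', figsize=(12, 4), title='Web Traffic')
    plt.show()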

In the graph above we see some strange points around 60,000 and 70,000. They appear to be outliers, and it is better to delete them, because they could distort the good values in later processing of the dataset.

Since the majority of points lie below 30,000, we delete every value larger than 30,000 from the dataset.
New Web Traffic graph
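Filtering them out is a one-liner, again assuming the 'visits' column:

    # Keep only plausible points; the 30,000 cutoff comes from the graph
    web_traffic = web_traffic[web_traffic['visits'] <= 30000]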

Step 3: Merge a few datasets into one big dataset
Join four datasets together
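A sketch of the join, assuming the four DataFrames above; an outer merge on 'date' keeps every row so we can inspect how the collection periods overlap:

    from functools import reduce

    frames = [web_traffic, tweets, trends, stock]
    merged = reduce(
        lambda left, right: pd.merge(left, right, on='date', how='outer'),
        frames)

    # How many values each column is missing
    print(merged.isna().sum())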

After the joining process, we see that the time periods the data was collected over are very different: there are only 24 days on which all of the datasets have values. Unfortunately, we can’t fill in the NaN values, because they are not missing at random.
Range of dates in every dataset

So, let’s take only the small part of the dataset that contains values in every column, where the fewest values are missing.

As we can see, the final dataset has a shape of 24 rows and 16 columns.
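With the outer merge already in hand, this reduces to dropping the incomplete rows:

    # Keep only the rows where every dataset contributed a value
    final = merged.dropna().reset_index(drop=True)
    print(final.shape)  # (24, 16) in this case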

Step 4: Plotting data

In the correlation matrix, the intersections of the stock prices ‘Open’, ‘High’, ‘Low’, and ‘Close’ with ‘count_comments’ show indexes between 0.82 and 0.90, which indicates a strong positive correlation. Let’s graph them and see what is there!
A positive correlation between the data, but the wrong information
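A sketch of how those numbers could be computed and the two series plotted side by side (column names as above):

    # Pairwise correlations between the numeric columns
    corr = final.corr(numeric_only=True)
    print(corr.loc[['Open', 'High', 'Low', 'Close'], 'count_comments'])

    # Price and comment count on twin y-axes
    ax = final.plot(x='date', y='Close', figsize=(12, 4))
    final.plot(x='date', y='count_comments', ax=ax.twinx(), color='tab:orange')
    plt.show()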

WOW! We found an almost perfect correlation between the comment count and the market price. But BE AWARE: even if the price keeps going up in the future, it will sometimes decrease more or less, whereas the comment count is a cumulative number, so its values will always increase. So this graph carries misinformation.
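One quick way to check this, under the same assumed column names, is to difference the cumulative series so only the number of new comments per day remains, and correlate that with the price instead:

    # Daily NEW comments instead of the ever-growing running total
    final['new_comments'] = final['count_comments'].diff()
    # If the original correlation was spurious, this should be much weaker
    print(final['new_comments'].corr(final['Close']))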

We plot a few more graphs hoping to find something informative, but nothing turns up.

Stage 1 conclusion

Sometimes when we deal with alternative data we face a lot of challenges: lots of missing data, small datasets, and so on, and my situation is exactly such a case. The initial datasets are too different, some too big and others too small; as a result, we could extract only a small dataset and try a simple analysis on it to find something informative.