Apache Spark is the big kid on the Data Science block right now. It handles batch processing, Machine Learning algorithms and interactive low-latency Data Mining, and it works well with many of the existing Big Data tools frequently used in enterprise data pipelines.
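To make that concrete, here's a minimal PySpark sketch of a batch job. The `events.csv` file and its `event_date` column are hypothetical stand-ins for your own data:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Start a local Spark session; on a cluster you'd point this at your master.
spark = SparkSession.builder.appName("batch-example").getOrCreate()

# Load a (hypothetical) CSV of events and run a simple batch aggregation.
events = spark.read.csv("events.csv", header=True, inferSchema=True)
daily_counts = events.groupBy("event_date").agg(F.count("*").alias("events"))
daily_counts.show()

spark.stop()
```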
There's a roster of state-of-the-art Big Data processing tools you should also be thinking about (a short illustrative sketch for each follows the list):
* Message bus - for data ingestion. Especially useful when analyzing streaming input data; not required when working with batch data.
* Search engine - a natural language processing search engine, usable for text search as well as a range of natural language processing tasks. Can provide very useful analytics for FCA when working with text and NLP.
* Amazon Redshift - AWS managed OLAP reporting database. A good choice for large-scale data analytics; can be used together with Redshift Spectrum to scale to very large datasets while keeping costs under control.
* Fast NoSQL database - used in real-time processing pipelines, such as real-time fraud detection.
"data lake" storage, stores raw data in native format on the massive scale
* Jupyter or Zeppelin - notebooks help share knowledge and findings across teams, and can be used for ad-hoc experimentation and reporting.
* Jupyter works well with Python Machine Learning libraries, making it easy to bridge the data storage, data analysis and Machine Learning worlds.
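Taking the list in order: the post doesn't name a specific message bus, but Apache Kafka is a common choice, and Spark's Structured Streaming can consume it directly. A minimal sketch, assuming a broker on localhost:9092, a hypothetical `clicks` topic, and the spark-sql-kafka connector package on the classpath:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("stream-ingest").getOrCreate()

# Subscribe to a (hypothetical) 'clicks' topic on a local Kafka broker.
stream = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "clicks")
          .load())

# Kafka delivers raw bytes; cast the payload to a string for processing.
messages = stream.selectExpr("CAST(value AS STRING) AS payload")

# Echo the stream to the console; a real pipeline would write to a proper sink.
query = messages.writeStream.format("console").start()
query.awaitTermination()
```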
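For the search engine item, Elasticsearch is one widely used option (an assumption; the post doesn't name one). A sketch using the elasticsearch-py 8.x client against a hypothetical local node and `reports` index:

```python
from elasticsearch import Elasticsearch

# Connect to a (hypothetical) local Elasticsearch node.
es = Elasticsearch("http://localhost:9200")

# Index a document, then run a full-text match query against it.
es.index(index="reports", id=1, document={"text": "Quarterly risk summary"})
es.indices.refresh(index="reports")

hits = es.search(index="reports", query={"match": {"text": "risk"}})
for hit in hits["hits"]["hits"]:
    print(hit["_source"]["text"])
```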
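Because Redshift speaks the PostgreSQL wire protocol, a standard driver such as psycopg2 can run OLAP-style queries against it. The endpoint, credentials and `sales` table below are all hypothetical:

```python
import psycopg2

# Endpoint, credentials and table names here are placeholders.
conn = psycopg2.connect(
    host="my-cluster.example.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="analyst",
    password="...",
)

with conn.cursor() as cur:
    # A typical OLAP-style aggregation over a (hypothetical) sales table.
    cur.execute("""
        SELECT region, SUM(amount) AS total
        FROM sales
        GROUP BY region
        ORDER BY total DESC;
    """)
    for region, total in cur.fetchall():
        print(region, total)

conn.close()
```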
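For the fast NoSQL item, Apache Cassandra is one common pick for real-time pipelines (again an assumption on our part). A sketch with the cassandra-driver package, using a hypothetical `fraud` keyspace and `transactions` table:

```python
from cassandra.cluster import Cluster

# Connect to a (hypothetical) local Cassandra node.
cluster = Cluster(["127.0.0.1"])
session = cluster.connect("fraud")

# Look up recent transactions for a card in (near) real time.
rows = session.execute(
    "SELECT txn_id, amount, ts FROM transactions WHERE card_id = %s LIMIT 10",
    ("card-123",),
)
for row in rows:
    print(row.txn_id, row.amount, row.ts)

cluster.shutdown()
```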
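For the data lake item, raw files often live in object storage such as S3 (an assumption; the post doesn't name a store). Spark can read them in place, assuming the hadoop-aws connector and AWS credentials are configured; the bucket and path are hypothetical:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lake-read").getOrCreate()

# Read raw JSON straight from a (hypothetical) S3 bucket; the schema is
# inferred on read, so no upfront modelling of the raw data is needed.
raw = spark.read.json("s3a://my-data-lake/events/2024/*.json")
raw.printSchema()
print(raw.count())
```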
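Finally, the notebook bridge in practice: pull a (hypothetical) feature table out of Spark into pandas and hand it straight to scikit-learn, all in one notebook cell. The `features.parquet` file and its columns are illustrative:

```python
from pyspark.sql import SparkSession
from sklearn.linear_model import LogisticRegression

spark = SparkSession.builder.appName("notebook-bridge").getOrCreate()

# Pull a (hypothetical) labelled feature table out of Spark into pandas...
features = spark.read.parquet("features.parquet").toPandas()

# ...and hand it directly to a Python Machine Learning library.
X = features[["f1", "f2", "f3"]]
y = features["label"]
model = LogisticRegression().fit(X, y)
print(model.score(X, y))
```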