Security Operations Center (SOC) analysts investigate alerts to determine whether they correspond to real attacks. However, the majority of alerts are false positives, and the volume of alerts exceeds the SOC's capacity to investigate them all. As a result, genuinely malicious activity can slip through: real attacks and compromised hosts may be dismissed along with the noise. In this article we examine how machine learning can be used to reduce false positives and improve detection in a real-world setting. We review the common data sources in a SOC, the SOC workflow, and how to analyse this data to build a machine learning system that works in practice. The article is written for two audiences. The first is researchers with a machine learning background but little experience in computer security who want to apply their algorithms to security problems. The second is security professionals with extensive domain knowledge and experience but no machine learning background who would like to build such a system. At the end of the paper, we use a case study to illustrate all of the steps, from data collection through label generation, feature engineering, and model training to performance evaluation, using a system built in Seyondike's SOC production environment.
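The steps listed above (data collection, label generation, feature engineering, model training, and evaluation) can be sketched as a minimal alert-triage pipeline. This is an illustrative sketch only, not the system described in the paper: the alert fields, features, and the simple perceptron-style model are all hypothetical choices made for this example.

```python
# Hypothetical sketch of an ML-assisted alert-triage pipeline.
# Steps: collect labelled alerts -> engineer features -> train a model
# -> rank new alerts so analysts investigate the riskiest first.
from dataclasses import dataclass
from typing import List

@dataclass
class Alert:
    signature: str   # IDS rule that fired (hypothetical field)
    dst_port: int    # destination port of the flagged connection
    bytes_out: int   # outbound bytes from the internal host
    label: int = 0   # 1 = analyst-confirmed incident, 0 = false positive

def features(a: Alert) -> List[float]:
    # Feature engineering: turn raw alert fields into numeric features.
    return [1.0 if a.dst_port not in (80, 443) else 0.0,   # unusual port?
            min(a.bytes_out / 1_000_000, 1.0),             # exfil volume
            1.0 if "exploit" in a.signature else 0.0]      # severe rule?

def train(alerts: List[Alert]) -> List[float]:
    # Train a tiny linear model with perceptron-style updates.
    w = [0.0, 0.0, 0.0]
    for _ in range(20):
        for a in alerts:
            x = features(a)
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            err = a.label - pred
            w = [wi + 0.1 * err * xi for wi, xi in zip(w, x)]
    return w

def score(w: List[float], a: Alert) -> float:
    return sum(wi * xi for wi, xi in zip(w, features(a)))

# Label generation would normally come from analyst verdicts; here the
# historical labels are hard-coded for illustration.
history = [
    Alert("exploit.smb", 445, 2_000_000, label=1),
    Alert("policy.http", 80, 1_000, label=0),
    Alert("exploit.web", 8080, 500_000, label=1),
    Alert("policy.dns", 443, 2_000, label=0),
]
w = train(history)

# Rank incoming alerts by predicted risk for the triage queue.
new = [Alert("policy.http", 80, 500), Alert("exploit.smb", 445, 3_000_000)]
queue = sorted(new, key=lambda a: score(w, a), reverse=True)
print(queue[0].signature)  # riskiest alert goes to the analyst first
```

In a real deployment the hand-rolled perceptron would be replaced by a properly evaluated classifier, and performance would be assessed on held-out, time-split data, but the pipeline shape (collect, label, featurize, train, rank) is the same one the paper walks through.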
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.