@pooja16 wrote:
We have time series data, where the data is segmented into windows of a fixed number of days.
In each window we have the probability of an event occurring. The event can be categorized as +ve or -ve, and the two probabilities sum to 1.
For example, the time series is divided into two-week windows:
| Time | +ve event | -ve event |
| --- | --- | --- |
| W1 | 0.7 | 0.3 |
| W2 | 0.4 | 0.6 |
| W3 | 0.2 | 0.8 |
| W4 | .... | .... |
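For concreteness, the data above could be loaded as a small pandas DataFrame like this (a minimal sketch; the column names are just illustrative, not from any existing pipeline):

```python
import pandas as pd

# Per-window probabilities of the +ve and -ve event from the table above.
# The two probability columns sum to 1 in every window.
data = pd.DataFrame(
    {
        "window": ["W1", "W2", "W3"],
        "p_positive": [0.7, 0.4, 0.2],
        "p_negative": [0.3, 0.6, 0.8],
    }
)
print(data)
```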
Now, we want to predict the probability of the +ve event (and of the -ve event) occurring in window 4.
A couple of questions:

1. How can we use Naive Bayes, or other algorithms that have probability at their core, for this?
2. Are there any other machine learning models suitable for this class of problems?

I would appreciate it if I could get some sample code.
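To make the question concrete, here is a minimal sketch of one framing we could imagine (an assumption for illustration, not something we have validated): treat the +ve probability series as a univariate time series, regress each window's value on the previous window's value with scikit-learn's LinearRegression, and take the -ve probability as 1 minus the prediction.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Observed +ve probabilities for windows W1..W3 (from the table above).
p_positive = np.array([0.7, 0.4, 0.2])

# One-step-ahead framing: predict p(t) from p(t-1).
X = p_positive[:-1].reshape(-1, 1)  # values at W1, W2
y = p_positive[1:]                  # values at W2, W3

model = LinearRegression().fit(X, y)

# Predict the +ve probability for W4 from the W3 value,
# clipping to [0, 1] so it stays a valid probability.
p4_positive = float(np.clip(model.predict([[p_positive[-1]]])[0], 0.0, 1.0))
p4_negative = 1.0 - p4_positive
print(p4_positive, p4_negative)
```

This is only meant to show the shape of a possible answer; with three windows of data any model will be badly underdetermined.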