@shubajit wrote:
I am designing a machine learning system that consists of a number of classifiers, each of which outputs a confidence score between 0 and 1. Some classifiers consume the output of other classifiers as features. Now suppose I retrain a classifier X, whose output is an input to some other classifier Y, on new training data. The problem I am facing is that since X's score distribution changes after retraining, Y no longer behaves as expected: it was learned on X's old score distribution. This has an avalanche effect, forcing me to retrain many classifiers down the stack.

Is there a way to design the system so that a classifier can be retrained standalone, without having to retrain its dependent classifiers?
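To make the coupling concrete, here is a minimal sketch of the situation described above, assuming scikit-learn-style classifiers; the names (`clf_x`, `clf_y`, `make_data`) and the toy data are hypothetical. A downstream classifier Y is fit on the confidence scores of an upstream classifier X, so retraining X on shifted data changes the feature distribution Y was trained on:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    # Toy binary problem; `shift` mimics new training data whose feature
    # distribution differs, which in turn changes X's score distribution.
    features = rng.normal(shift, 1.0, size=(n, 3))
    labels = (features.sum(axis=1) > 0).astype(int)
    return features, labels

# Train the upstream classifier X, then train Y on X's confidence scores.
feats, labels = make_data(1000)
clf_x = LogisticRegression().fit(feats, labels)
x_scores = clf_x.predict_proba(feats)[:, [1]]       # score in [0, 1]
clf_y = LogisticRegression().fit(x_scores, labels)  # Y depends on X's output

# Retrain X on new data: its score distribution shifts, and Y, which was
# fit on the old distribution, now receives inputs it never saw in training.
new_feats, new_labels = make_data(1000, shift=1.0)
clf_x_retrained = LogisticRegression().fit(new_feats, new_labels)

eval_feats, _ = make_data(200)
old_scores = clf_x.predict_proba(eval_feats)[:, [1]]
new_scores = clf_x_retrained.predict_proba(eval_feats)[:, [1]]
print("mean X score before retrain:", old_scores.mean())
print("mean X score after retrain: ", new_scores.mean())
print("fraction of Y predictions that flip:",
      (clf_y.predict(old_scores) != clf_y.predict(new_scores)).mean())
```

On the same evaluation points, the retrained X produces a shifted score distribution, and some of Y's predictions flip even though Y itself was never touched, which is the avalanche effect in question.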