During my master's studies, I had to take a course on Research Methodology which introduced me to an interesting concept of information management. With our limited brain capacity, the more you read papers about other people's ideas, the less space is available for your own, and the more you reinforce their assumptions, visions and models.
In practice, great researchers are aware of this consequence of the no free lunch theorem and try to keep a good balance between reading papers and exploring their own research. By simply applying the contrastive divergence concept to your approach, you can gauge your distance to the trend and get an estimation of the impact of a possible discovery.
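To make the metaphor a bit more concrete, here is a playful toy sketch of my own (not anything from an actual paper): treat each research idea as an embedding vector, and measure your "distance to the trend" as the gap between your idea and the centroid of recently published ones. All names and numbers below are invented for illustration.

```python
import numpy as np

def distance_to_trend(my_idea: np.ndarray, recent_papers: np.ndarray) -> float:
    """Toy 'contrastive' score: Euclidean distance between an idea's
    embedding and the centroid of recent paper embeddings."""
    trend_centroid = recent_papers.mean(axis=0)
    return float(np.linalg.norm(my_idea - trend_centroid))

# Hypothetical 2-D "idea embeddings"; the numbers are made up.
recent_papers = np.array([[0.9, 0.1], [0.8, 0.2], [0.85, 0.15]])  # the current trend
my_idea = np.array([0.2, 0.9])                                    # off the beaten path

print(f"Distance to the trend: {distance_to_trend(my_idea, recent_papers):.2f}")
# A larger distance suggests a more original, and riskier, direction.
```

Of course, the real "embedding" lives in your head, not in a vector; the sketch is only meant to show the buy-low, sell-high intuition behind the metaphor.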
The Machine Learning research community, like most other communities, has the tendency to recruit top-grade students who are used to following exactly the line of thought of their teachers. This long training process is, in my opinion, extremely damaging to the training of research capacity (i.e., a suboptimal cost function). This explains why most master's students end up as cheap research labor: they can only experiment with other people's ideas and make minor contributions.
Top researchers allow their students to follow their own lines of thought or, if they have no specific ideas, suggest some. I would not have done a research master's without this freedom. Thanks, Yoshua!
So, if you want to have the greatest impact on your community, limit the number of papers you read, form your own ideas, and play with your concepts to train your own intuitions about the unknown guiding rules you are looking for.
You might say: what is your contribution? I haven't heard about it. My contribution is that I have built experimental evidence of fundamental problems in back-propagation optimization and built the skeleton of high-level explanations. Usually, we don't publish this type of result until we find a solution to the problem, which, unfortunately, I haven't reached yet; but it is coming slowly. It is a long process, and I have learned to be patient.
So, if you want to have the greatest impact on your community, limit the number of papers you read to ensure you don't constrain yourself to other people's models. Why use a parametric model that limits your solution exploration space?
You can trust the collective research discovery process that ensures the evolution of humankind, because someone else will find it eventually; or you can use it to increase the likelihood that you will make an important discovery (i.e., use it as a contrastive divergence cost function). If everyone applied this strategy, or cost function, I am pretty sure we would evolve faster. To get to that point, we would need to encourage the publication of failed strategies so that others don't waste time reproducing the same ideas, but that could be elaborated in another thread post, since it involves an evolution of society.
Traders know about this simple strategy: buy low, sell high, don't follow the trend, take risks.