There is a theoretical framework in which the problem does make sense: essentially Solomonoff induction. Fix some formal language (lambda calculus, Turing machines, something like that) in which we can describe programs that spit out infinite sequences of numbers. Given a finite sequence of numbers, consider all the programs whose output starts with those numbers, pick the shortest one, and use it to predict the next number. Almost equivalently: the next number is the one that makes the extended sequence have the smallest Kolmogorov complexity.
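Here is a toy sketch of the idea in Python. The expression language (just `n`, small constants, `+`, `*`), the size limit, and the names `exprs` and `predict_next` are all my own arbitrary choices, not part of the theory; real Solomonoff induction enumerates all programs of a universal machine, which is exactly the uncomputable part.

```python
def exprs(size):
    """Yield every expression of exactly `size` nodes in a tiny toy
    language: the index variable n, the constants 0..3, and + and *."""
    if size == 1:
        yield "n"
        for c in range(4):
            yield str(c)
        return
    # Split the remaining size between a left and right subexpression.
    for left in range(1, size - 1):
        for a in exprs(left):
            for b in exprs(size - 1 - left):
                yield f"({a}+{b})"
                yield f"({a}*{b})"

def predict_next(prefix):
    """Scan programs in order of increasing size and return the
    continuation given by the first (i.e. shortest) program whose
    outputs match the prefix, along with the program itself."""
    for size in range(1, 8):
        for e in exprs(size):
            f = eval(f"lambda n: {e}")  # fine for a toy sketch
            if all(f(n) == v for n, v in enumerate(prefix)):
                return f(len(prefix)), e
    return None

print(predict_next([0, 1, 4, 9]))  # the squares: shortest match is (n*n)
```

Because the search goes shortest-first, a prefix like `[0, 1, 4, 9]` is claimed by the three-node program `(n*n)` before any longer rule gets a chance, so the predictor outputs 16.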
If you think up a rule that generates numbers, this method will eventually identify it, given enough data points.
Of course, this is a nice little theory that doesn't work in practice: Kolmogorov complexity is uncomputable, so you run into the halting problem and that sort of nastiness.