The problem is easy. I'll outline the solution here, and leave it to your conscience to fill in the details afterwards.
The answer follows almost immediately from the definition of Big O notation, and has nothing to do with how the algorithm behaves in the special case n < 20:
"f(n) = O(g(n)) if there exist constants a > 0 and N > 0 such that f(n) < a*g(n) whenever n > N." So if you pick some finite cutoff, e.g. 20, and use a different algorithm for n < 20, it makes no difference to the asymptotic bound.
Formally, let T(n) be the cost of your new algorithm. Because it is O(n log n) for n > 20, there exist N > 0 and a > 0 such that T(n) < a*n log n for all n > N. Choose M = max(20, N). Then T(n) < a*n log n for all n > M, and since such M > 0 and a > 0 exist, T(n) = O(n log n).
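As a concrete illustration, here is a sketch of the kind of hybrid algorithm in question: a mergesort that switches to an O(n^2) insertion sort below a fixed cutoff. The names, the cutoff value of 20, and the structure are my own assumptions for the example; the point is that the cutoff only changes the constant factor, not the O(n log n) bound.

```python
CUTOFF = 20  # any fixed constant works; it only affects the constant a, not the bound

def insertion_sort(a):
    # O(n^2) in general, but here len(a) < CUTOFF, so the cost is bounded by a constant
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def hybrid_sort(a):
    # Below the cutoff, use the "different algorithm"; above it, plain mergesort
    if len(a) < CUTOFF:
        return insertion_sort(list(a))
    mid = len(a) // 2
    left = hybrid_sort(a[:mid])
    right = hybrid_sort(a[mid:])
    # Merge the two sorted halves in O(n)
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

For any input with n > 20, the recursion behaves exactly like mergesort down to subproblems of constant size, so the T(n) < a*n log n bound from the argument above applies unchanged.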