In general, heavy use of malloc() can and will leave your memory badly fragmented. When a program makes many thousands of malloc() calls, it's worth using or writing a memory-management library that coalesces those scattered allocations into fewer, larger segments. That means finding or building a library that pre-allocates larger chunks (say, roughly the size of a thousand single-line allocations) and then hands out pointers from that pool for your individual lines. It isn't that big or hard a concept, and it can change what you're dealing with dramatically - I've had to implement these myself in bygone years. A minimal sketch is below.
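Here is a minimal sketch of the idea in C, assuming fixed-size line buffers. The names (line_pool, pool_create(), pool_alloc_line()) are made up for illustration; real code would add error handling and a way to chain additional pools when one fills up:

```c
#include <stdlib.h>

/* One big chunk carved into fixed-size slots for line buffers.
 * Names and sizes here are illustrative, not from any particular library. */
typedef struct {
    char   *chunk;      /* one large malloc() instead of thousands of small ones */
    size_t  slot_size;  /* bytes reserved per line */
    size_t  capacity;   /* number of slots in the chunk */
    size_t  used;       /* slots handed out so far */
} line_pool;

static line_pool *pool_create(size_t slot_size, size_t capacity)
{
    line_pool *p = malloc(sizeof *p);
    if (!p) return NULL;
    p->chunk = malloc(slot_size * capacity);   /* the one big allocation */
    if (!p->chunk) { free(p); return NULL; }
    p->slot_size = slot_size;
    p->capacity  = capacity;
    p->used      = 0;
    return p;
}

/* "mymalloc" for one line: hand back the next free slot from the chunk. */
static char *pool_alloc_line(line_pool *p)
{
    if (p->used == p->capacity)
        return NULL;                  /* caller would create another pool here */
    return p->chunk + (p->used++ * p->slot_size);
}

/* Free everything at once - no per-line free(), so no per-line fragmentation. */
static void pool_destroy(line_pool *p)
{
    if (p) { free(p->chunk); free(p); }
}
```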
The problem centers on fragmentation: as you carve up, allocate, and release memory, it becomes harder and harder for the allocator (and ultimately the OS) to find a contiguous segment large enough to hold your data, so every malloc() has to hunt for a spot to land. When you restart your app or bounce the server, main memory is freed back into larger, more contiguous pieces, and finding a spot for your malloc() is easy again.
Lastly, consider NOT pulling data from Oracle one row at a time through a cursor - that is very slow compared to pulling all rows, or subsets of rows, per round trip. It takes some tuning, but you could easily see a thousand-fold improvement in program load time versus row-at-a-time fetching, depending on lots of parameters, of course. A rough sketch follows.
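For example, an array fetch with OCI pulls many rows per round trip instead of one. The sketch below is a rough, hedged illustration: the connection details, table/column names, batch size, and column width are placeholder assumptions, and error checking is stripped down to keep it short:

```c
#include <stdio.h>
#include <string.h>
#include <oci.h>

#define BATCH_ROWS 1000   /* rows fetched per round trip (assumed batch size) */
#define COL_WIDTH  4001   /* assumed max line width + NUL terminator */

int main(void)
{
    OCIEnv    *env;  OCIError *err;  OCISvcCtx *svc;  OCIStmt *stmt;
    OCIDefine *def = NULL;
    static char lines[BATCH_ROWS][COL_WIDTH];   /* one slot per fetched row */
    ub4   rows = 0, i;
    sword rc;
    const OraText *sql = (const OraText *)"SELECT line_text FROM my_lines";

    OCIEnvCreate(&env, OCI_DEFAULT, NULL, NULL, NULL, NULL, 0, NULL);
    OCIHandleAlloc(env, (void **)&err, OCI_HTYPE_ERROR, 0, NULL);
    OCILogon2(env, err, &svc,
              (const OraText *)"scott", 5,      /* placeholder credentials */
              (const OraText *)"tiger", 5,
              (const OraText *)"mydb",  4, OCI_DEFAULT);

    OCIHandleAlloc(env, (void **)&stmt, OCI_HTYPE_STMT, 0, NULL);
    OCIStmtPrepare(stmt, err, sql, (ub4)strlen((const char *)sql),
                   OCI_NTV_SYNTAX, OCI_DEFAULT);

    /* Define the whole array once; each fetch then fills up to BATCH_ROWS slots. */
    OCIDefineByPos(stmt, &def, err, 1, lines, COL_WIDTH, SQLT_STR,
                   NULL, NULL, NULL, OCI_DEFAULT);

    OCIStmtExecute(svc, stmt, err, 0, 0, NULL, NULL, OCI_DEFAULT);

    do {
        rc = OCIStmtFetch2(stmt, err, BATCH_ROWS, OCI_FETCH_NEXT, 0, OCI_DEFAULT);
        OCIAttrGet(stmt, OCI_HTYPE_STMT, &rows, NULL, OCI_ATTR_ROWS_FETCHED, err);
        for (i = 0; i < rows; i++)
            printf("%s\n", lines[i]);           /* process a whole batch at once */
    } while (rc == OCI_SUCCESS);                /* OCI_NO_DATA ends the loop */

    OCIHandleFree(stmt, OCI_HTYPE_STMT);
    OCILogoff(svc, err);
    OCIHandleFree(err, OCI_HTYPE_ERROR);
    OCIHandleFree(env, OCI_HTYPE_ENV);
    return 0;
}
```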
TL;DR: Use larger chunk sizes for your allocations, and hand out space within those pre-allocated chunks to your pointer list via your own calls (e.g. write a little "mymalloc()"-style library that you use to get memory for each line of returned data). A short usage example is below.
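As a usage illustration (building on the hypothetical pool functions sketched earlier), the per-line allocations collapse into one big chunk and one free at the end:

```c
/* Usage sketch - assumes the line_pool definitions from the earlier snippet. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* ~1000 lines of up to 256 bytes each, carved from one big chunk. */
    line_pool *p     = pool_create(256, 1000);
    char     **lines = malloc(1000 * sizeof *lines);

    for (size_t i = 0; i < 1000; i++) {
        lines[i] = pool_alloc_line(p);           /* no per-line malloc() */
        snprintf(lines[i], 256, "row %zu", i);   /* stand-in for a fetched row */
    }

    /* ... use lines[] ... */

    free(lines);
    pool_destroy(p);                             /* one free covers every line */
    return 0;
}
```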