To generate moving averages and other technical indicators, I load a sizable chunk of the history into memory before working through it. As an experiment, I tried to load all 5,000,000 data points (timestamp, price, volume) from the CSV into memory. Node.js choked at ~1.5m points, using 2.5GB of RAM, and slowed to a crawl. C++ fared much better on a very similar operation: all 5m data points loaded in about 30 seconds while taking only ~0.75GB of RAM. This gives me a lot of hope, since I was originally planning to compute the averages with CUDA, and CUDA would naturally limit my working set to the size of the video card's memory (3.5GB usable on an Nvidia GTX 970).
For those interested, both file operations were nearly identical: using built-in methods from Node.js and C++ respectively to read the file stream line by line, then split each line into entries that were parsed into ints/floats.