Hardware to optimize processing large text files

Hi list,

I'm buying new machines for my lab and would like to optimize the speed with which we process large datasets. At the moment one of our projects generates 5+ GB of nested JSON with each data export, and a 10-core iMac Pro takes about 20 minutes to process the raw data into something useful. Since we need to run this processing step often, I'm wondering whether there is any general principle with Stata for increasing speed. Should I increase the number of cores, or the clock speed of each core? Does it matter if I add a high-end GPU to the machine, or does Stata not use GPUs to boost performance? Are there any other items I should deal with on the hardware end? (Memory isn't an issue; our machines all have 64+ GB of RAM.)
Thanks!
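Since parsing nested JSON is typically CPU-bound, the cores-vs-clock question can be checked empirically before buying hardware: time the same parsing work with one worker and with several, and see how close the speedup comes to the worker count. Below is a minimal sketch of such a benchmark in Python rather than Stata, with entirely made-up record shapes and sizes; it is meant only to illustrate the measurement, not the poster's actual pipeline:

```python
# Hypothetical benchmark: parse synthetic JSON records serially vs. in
# parallel, to see how well the workload scales with core count.
# All names, field layouts, and sizes here are illustrative.
import json
import time
from multiprocessing import Pool

def parse_record(line):
    # Simulate per-record work: decode a nested JSON record and
    # extract one field, as a stand-in for the real processing step.
    rec = json.loads(line)
    return rec["values"][0]

def make_lines(n):
    # Synthetic stand-in for one export's worth of records.
    return [json.dumps({"id": i, "values": list(range(50))}) for i in range(n)]

def bench(lines, workers):
    # Return (elapsed seconds, parsed results) for the given worker count.
    start = time.perf_counter()
    if workers == 1:
        results = [parse_record(line) for line in lines]
    else:
        with Pool(workers) as pool:
            results = pool.map(parse_record, lines, chunksize=1000)
    return time.perf_counter() - start, results

if __name__ == "__main__":
    lines = make_lines(200_000)
    t1, r1 = bench(lines, 1)
    t4, r4 = bench(lines, 4)
    assert r1 == r4  # same answers either way
    print(f"1 worker: {t1:.2f}s, 4 workers: {t4:.2f}s")
```

If the measured speedup is close to linear in the worker count, more cores are the better buy; if it flattens out early (e.g. because the step is dominated by disk I/O or a serial phase), faster individual cores or faster storage will matter more.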