Hi list, I'm buying new machines for my lab and would like to optimize the speed with which we process large datasets. At the moment, one of our projects generates 5+ GB of nested JSON with each data export, and a 10-core iMac Pro takes about 20 minutes to process the raw data into something useful. Since we need to run this processing step often, I'm wondering whether there is any general principle in Stata for increasing speed:

- Should I increase the number of cores, or the clock speed of each core?
- Does it matter if I add a high-end GPU to the machine, or does Stata not use GPUs to boost performance?
- Are there any other items I should deal with on the hardware end? (Memory isn't an issue; our machines all have 64+ GB of RAM.)
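For context, the processing step is conceptually like the toy Python sketch below (not our actual Stata pipeline, and the record layout is made up): parse each exported record and reduce it to something useful. If the export can be split into independent records, the work fans out across cores and core count dominates; if it's one monolithic parse, only clock speed helps.

```python
import json
from multiprocessing import Pool

def parse_record(line: str) -> int:
    # Stand-in for per-record processing: parse one JSON record and
    # extract a field of interest (field name is hypothetical).
    record = json.loads(line)
    return record["value"]

def process_export(lines, workers=4):
    # Fan independent records out over `workers` cores and aggregate.
    # This part scales with core count; a single json.load() of the
    # whole 5+ GB file would be single-threaded and scale with clock.
    with Pool(workers) as pool:
        return sum(pool.map(parse_record, lines, chunksize=1000))

if __name__ == "__main__":
    # Tiny stand-in for an export with one JSON record per line.
    demo = [json.dumps({"value": i}) for i in range(10)]
    print(process_export(demo))  # 45
```

The same cores-versus-clock trade-off is what I'm trying to understand for Stata's own commands.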
thanks!
[Thread title: Hardware to optimize processing large text files]