Hi list,

I'm buying new machines for my lab and would like to optimize the speed with which we process large datasets. At the moment one of our projects generates 5+ GB of nested JSON with each data export, and a 10-core iMac Pro takes about 20 minutes to process the raw data into something useful.

I'm wondering whether there is any general principle with Stata for increasing speed. Since we need to run this processing step often:

- Should I increase the number of cores, or the clock speed of each core?
- Does it matter if I add a high-end GPU to the machine, or does Stata not use GPUs to boost performance?
- Are there any other items I should deal with on the hardware end? (Memory isn't an issue; our machines all have 64+ GB of RAM.)

Thanks!
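For what it's worth, the JSON-flattening step itself is the kind of CPU-bound work that scales with core count rather than with a GPU, so one way to answer the cores-vs-clock question empirically is to benchmark the parse step at different worker counts. A minimal sketch outside Stata, in Python (the record structure and field names here are hypothetical stand-ins for your export format):

```python
import json
from concurrent.futures import ProcessPoolExecutor

def flatten(record_text):
    # Parse one JSON record and keep only the fields of interest.
    rec = json.loads(record_text)
    return {"id": rec["id"], "value": rec["payload"]["value"]}

def process_records(lines, workers=4):
    # CPU-bound JSON parsing parallelizes across processes, so throughput
    # should rise with the number of workers up to the physical core count.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(flatten, lines, chunksize=1000))

if __name__ == "__main__":
    # Hypothetical records standing in for one export's worth of JSON lines.
    lines = [json.dumps({"id": i, "payload": {"value": i * 2}})
             for i in range(10_000)]
    out = process_records(lines, workers=4)
    print(len(out))
```

Timing this with 1, 2, 4, ... workers on an existing machine would show how close to linearly your particular workload scales with cores, which is a better purchase guide than spec sheets alone.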