Hi list,

I'm buying new machines for my lab and would like to optimize how quickly we process large datasets. One of our projects currently generates 5+ GB of nested JSON with each data export, and a 10-core iMac Pro takes about 20 minutes to process the raw data into something useful. Since we need to run this processing step often, I'm wondering whether there is any general principle for making Stata faster on the hardware side:

- Should I increase the number of cores, or the clock speed of each core?
- Does it matter if I add a high-end GPU, or does Stata not use GPUs to boost performance?
- Is there anything else I should attend to on the hardware end? (Memory isn't an issue; our machines all have 64+ GB of RAM.)
thanks!
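Not a Stata answer, but as a rough illustration of the cores-vs-clock-speed tradeoff: extra cores only help a CPU-bound processing step if the work can be split into independent chunks, whereas higher clock speed helps even a serial step. A minimal Python sketch of fanning newline-delimited JSON records across worker processes (the record structure and the doubling "work" are made-up stand-ins for a real processing step):

```python
import json
from multiprocessing import Pool

def process_record(line):
    """Parse one JSON record and do some CPU-bound work on it
    (here just doubling a field, as a stand-in for real processing)."""
    rec = json.loads(line)
    return rec.get("value", 0) * 2

def process_export(lines, workers=4):
    """Fan the records out across `workers` processes.
    With workers=1 this degrades gracefully to serial processing."""
    with Pool(workers) as pool:
        return pool.map(process_record, lines, chunksize=1000)

if __name__ == "__main__":
    # Tiny synthetic export standing in for a 5+ GB JSON dump.
    lines = [json.dumps({"value": i}) for i in range(10)]
    print(process_export(lines, workers=2))
```

The takeaway for hardware is the same regardless of language: if the tool (or your own preprocessing script) parallelizes the step, more cores scale roughly linearly; if it runs the step on a single thread, only per-core clock speed and single-thread performance matter.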