**Question:** I am reading .csv files in Python 2.7 with up to 1 million rows and 200 columns (the files range from 100 MB to 1.6 GB). I can do this (very slowly) for the files with under 300,000 rows, but once I go above that I get memory errors. My code looks like this: `def getdata(filename, criteria): data.append(getstuff(filename, criterion))`. The reason for the else clause in the getstuff function is that all the elements which fit the criterion are listed together in the csv file, so I leave the loop when I get past them to save time. How can I manage to get this to work with the bigger files? My computer has 8 GB of RAM, running 64-bit Windows 7, and the processor is 3.40 GHz (not certain what information you need).

**Answer:** You are reading all rows into a list and then processing that list; process each row as it is read instead. If you need to filter the data first, use a generator function: `import csv`, yield `next(datareader)` to pass along the header row, then yield only the matching rows and return once the consecutive series of matching rows has been read. A sketch of this approach follows.
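A minimal sketch of such a filtering generator, written for Python 2.7 as in the question; the `getstuff` name comes from the question, while the filter column `row[3]` and the exact matching logic are assumptions for illustration:

```python
import csv

def getstuff(filename, criterion):
    """Yield the header row, then only the rows whose filter column matches criterion."""
    with open(filename, "rb") as csvfile:   # binary mode for the csv module on Python 2.7
        datareader = csv.reader(csvfile)
        yield next(datareader)              # yield the header row
        count = 0
        for row in datareader:
            if row[3] == criterion:         # row[3] is an assumed filter column
                yield row
                count += 1
            elif count:
                # done when having read a consecutive series of matching rows
                return
```

Because the function yields rows one at a time, the caller can loop over `getstuff(filename, criterion)` and handle each row immediately instead of accumulating the whole file in a list, which keeps memory use flat regardless of file size.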