So I've got an interesting problem that's holding me back. I've got a massive list that contains around 150 lists of length 28. Each of these small lists looks something like this: the first two items in each list are something like A 1, A 2, B 1, etc., and it's worth pointing out that each of those items is a string that can contain anything from \n characters, to nothing at all, to commas. My delimiter for csv files is the | character.

The issue I'm having is that when I try to write this data back to a csv file, I'm either only able to put the entire list as a string into the very first column on each row (meaning it looks exactly like what I've included above), or to put each item on its own row. Ideally, what I need to accomplish is each small list within the large list being on its own row, with each item in the small list being in its own cell on that row. So each row would utilize columns A-AB for one list.

The code I have tried looks similar to: `with open("testFile.csv", "w") as csvfile:` then `writer = csv.writer(csvfile, delimiter='|')` then `for row in listOfRows:` with nested loops to access each item within. I've tried a variation of this that writes each entire small list, but that ends up putting each small list into one cell in the first column.

First, you have to open the output file; you do this by using the open() function. Second, you have to make a csv.writer that sends your list of data to the output file every time you want to write.

Then, call the function with something like this: `layer = "C:\my_folder\my_layer.shp"` #can also be a path to a featureclass in a Geo-Database, and `export_outfile = "C:\my_folder\my_output.` Use \t instead if you want tab delimited output.
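The two steps described above (open the file, then hand rows to a `csv.writer`) can be sketched as follows. The file name and `listOfRows` are taken from the question, but the sample data here is invented for illustration; `writerows` already places each inner list on its own row with one item per cell, so the nested loops aren't needed, and the csv module quotes fields that contain newlines or delimiters:

```python
import csv

# A small stand-in for the ~150 lists of length 28 described above;
# fields may contain newlines, commas, or be empty.
listOfRows = [
    ["A 1", "first\nline", "", "x,y"],
    ["B 1", "second", "data", ""],
]

# newline="" stops the csv module from emitting extra blank lines on Windows.
with open("testFile.csv", "w", newline="") as csvfile:
    writer = csv.writer(csvfile, delimiter="|")
    # writerows puts each inner list on its own row,
    # one item per |-separated cell -- no nested loops needed.
    writer.writerows(listOfRows)
```

Because fields containing `|` or newlines are quoted automatically, the data round-trips cleanly through `csv.reader` with the same delimiter.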
In the line after the first with statement, you can choose the delimiter.

The function exports a feature class's table to a txt file. Start with `import csv` (if you have unicode characters in your table, use `import unicodecsv as csv` instead). First let's make a list of all of the fields in the table and store it in `field_names`; then make the writer with `dw = csv.DictWriter(f, field_names, delimiter=' ')` and write all field names to the output file; then `with arcpy.da.SearchCursor(infile, field_names) as cursor:` makes the search cursor that will iterate through the rows of the table.

In QGIS 1.8, DON'T export or import into sqlite or spatialite directly from under LAYERS via right-clicking. It loads the dbf table, which can be edited, but if your column names or data widths exceed the shapefile/dbf limit then the data will be truncated. Use the Qspatialite plugin instead to load sqlite databases, and right-click from Qspatialite to load into LAYERS for QGIS editing.

Alternately, you can right-click on the table.csv file under your QGIS 1.8 LAYERS, export it to shapefile, then load it as a "vector" file, changing the file-extension filter to ".*" to see ALL files available, including dbf files without associated shapes.

The workaround is to import the csv file into a db.sqlite table using QGIS's Qspatialite or Spatialite_GUI etc., then edit the table and export that data back into a table.csv file, if necessary. After importing back into a csv file, the table names can easily be restored with a text or spreadsheet editor, for instance Notepad, Gedit or Excel.

That additional information is for the posterity of future folks looking over this question for an answer that suits their needs.
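The arcpy-based export can't run outside ArcGIS, but the `csv.DictWriter` pattern it relies on can be sketched with plain dictionaries standing in for the rows a `SearchCursor` would yield. The field names, sample values, and output file name here are all invented for illustration:

```python
import csv

# Invented field names standing in for the feature class's fields.
field_names = ["ID", "NAME", "AREA"]

# Each dict stands in for one row an arcpy.da.SearchCursor would yield.
rows = [
    {"ID": 1, "NAME": "parcel_a", "AREA": 12.5},
    {"ID": 2, "NAME": "parcel_b", "AREA": 7.25},
]

with open("my_output.txt", "w", newline="") as f:
    # Change delimiter to "\t" here for tab-delimited output.
    dw = csv.DictWriter(f, field_names, delimiter=" ")
    dw.writeheader()  # writes all field names to the output file
    for row in rows:
        dw.writerow(row)
```

In the real function, the loop body would build each dict from the tuple returned by the cursor before passing it to `writerow`.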