Home Forums Hive / HCatalog Handling of quoted csv format (through Sandbox web interface)

This topic contains 1 reply, has 2 voices, and was last updated by  tedr 1 year, 2 months ago.

  • Creator
  • #16161

    David Turnbull

    Note that I’m working with Sandbox simply as a learning exercise – this is not a production issue for me.

    Within the Sandbox web interface, I created an HCatalog table for a file with a quoted csv format. By this I mean "ABC","DEF","1.0"…etc.

    The actual data contains all fields including those with numeric values enclosed in double-quotes.

    I’m finding that if I define these numeric columns as anything other than string type (e.g. float or int), the resulting table displays the values as NULL, presumably because the quotation marks are taken as an indication that these can only be string values.

    When choosing the column delimiter within the web interface, I tried selecting the checkbox for Excel-style functionality (I forget the exact description). Within the data preview, this caused all the values to be shown without the quotes, but after the table was defined the browse function still showed NULL for any fields defined as type float or int.

    Is there any way to have HCatalog cast these “string” values as numeric type fields?

    Is this an issue with Sandbox (i.e. the GUI provides access to commonly used functions, but not all), or is this a limitation of the current release of HCatalog or Hive?

    I looked at the documentation at http://incubator.apache.org/hcatalog/docs/r0.5.0/index.html, but couldn’t find anything about how delimiters or file formats are to be specified. There was some information for Hive, but nothing addressing this. I also checked issues.apache.org, but couldn’t see anything on this.


  • Author
  • #16206


    Hi David,

    You have bumped into a limitation of the current version of Hive/HCatalog: double-quoted data is always interpreted as strings. The easiest workaround for this would be to strip the double quotes from the data files before loading them into Hadoop. You could do this with sed -i 's/"//g' your-data-file.
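    For example, here is a minimal sketch of that quote-stripping step, assuming a small sample file (the file name sample.csv is just for illustration). Note that sed -i edits the file in place, so keep a backup if the data matters:

    ```shell
    # Create a sample of the quoted CSV format described above.
    printf '"ABC","DEF","1.0"\n"GHI","JKL","2.5"\n' > sample.csv

    # Strip every double quote in place.
    sed -i 's/"//g' sample.csv

    # The file now contains plain, unquoted CSV that Hive/HCatalog
    # can parse into float or int columns:
    cat sample.csv
    # ABC,DEF,1.0
    # GHI,JKL,2.5
    ```

    After this, the numeric columns should load as float/int without coming back NULL.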

    I hope this helps,
