The import feature can be used to load data from an external source into Unanet. These import files can be used, for example, to electronically load project, personnel, or assignment information (and more) that is available in a separate external system. The imports can also be used in conjunction with the export feature to effect bulk updates (i.e., export data, manipulate it, re-import it).
All of the imports are available to users having the Administrator role, with certain imports also available to other roles. See the List of Imports below for details on which other roles have access to specific imports.
In addition, Administrators can grant users with the Data Import User role access to specific imports on the Admin Setup >> Miscellaneous >> Data Import/Export Template Access page. Data Import Users will only have access to the imports granted on this page.
Topics covered on this help page include:
Command Line Import (facilitates unattended imports)
Integration Management Utility (facilitates the automation of inbound interfaces)
The following entities can currently be populated via the Unanet imports.
-- Administrator -- All of the imports are available to users having the Administrator role.
-- P&R Administrator -- With a few exceptions, all imports are available to users having the P&R Administrator role. Those imports where that role cannot access the menu option are marked with a double asterisk **.
Invoice (located under the Financials - Invoicing & Posting submenu)
Check out the specific import help pages for more information about any options that may be available for each type.
You can use the same file to insert and update data in Unanet. As each entry in an import file is being processed, if the entry does not exist, it will be inserted as a new record. If the entry already exists, update logic will be invoked. As a general rule when updating data via an import, empty values in the import file are ignored, that is, the value in the database is preserved (rather than being deleted).
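As an illustration of this insert/update rule only (this is a sketch of the behavior described above, not Unanet's actual implementation, and the record and field names are hypothetical), the logic can be pictured as follows:

def apply_import_row(existing_records, key, row):
    if key not in existing_records:
        # New key: the entry is inserted as a new record.
        existing_records[key] = dict(row)
    else:
        # Existing key: update logic is invoked, but empty values in the
        # import file are ignored so the stored value is preserved.
        for field, value in row.items():
            if value != "":
                existing_records[key][field] = value

people = {"jsmith": {"username": "jsmith", "first_name": "John", "last_name": "Smith"}}
apply_import_row(people, "jsmith", {"username": "jsmith", "first_name": "Jon", "last_name": ""})
# first_name is updated to "Jon"; the empty last_name is ignored, so "Smith" is preserved.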
Mass Updates
If you need to make mass changes to your data, you may find it useful to use the Export and Import features to help facilitate the changes. In some instances, it may be easier to export your data, apply the changes directly to the exported data (e.g., via Excel), and then re-import the modified file. Many of the system imports have a corresponding export capability.
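As a minimal sketch of that export/modify/re-import workflow (the file names and the bill_rate column are hypothetical; check the relevant import help page for the actual column names), a script could apply the bulk change for you:

import csv

# Read an exported .csv file, raise every bill_rate by 10%, and write a new
# file that can be re-imported. The header row (including any leading *) is
# carried through unchanged.
with open("project_export.csv", newline="") as infile, \
     open("project_import.csv", "w", newline="") as outfile:
    reader = csv.DictReader(infile)
    writer = csv.DictWriter(outfile, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        if row.get("bill_rate"):
            row["bill_rate"] = str(round(float(row["bill_rate"]) * 1.10, 2))
        writer.writerow(row)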
You must either supply the data in the default field order sequence (as defined in each import help page), or use the appropriate column header tag, in which case you can import only those columns of interest, in any order you desire.
When using the column header tag approach, be sure the header row is the first row in the file and that the first character on that line is an asterisk.
Note that fields are either always required, conditionally required, or optional.
If you don't want to import information for a non-required field, just leave that field blank.
If you would like to blank out the value of a non-required field, you can supply the !BLANK! value in that field (see the example following these notes).
If a given record does not contain a value for every field, the import will assume the remaining values are blank (on Adds) and assume no change (on Updates).
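For example, using the column header approach with a hypothetical import whose columns are named project_code, title, and project_manager (see the specific import help page for the actual column names), the following rows would update the title on the first project while leaving its project_manager untouched, and clear the project_manager on the second project while leaving its title untouched:

*project_code,title,project_manager
PROJ-001,New Title,
PROJ-002,,!BLANK!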
The file to import must be saved in a comma delimited format. The fields can be enclosed in double quotes -- which would be particularly necessary should the data being imported contain commas.
Examples:
field1,field2,field3
"field1","field2","field3"
field1,"data containing commas (like this), should be double quoted",field3
When using the provided Excel import templates, you'll notice that the first row in the file is a header record listing all column headings. So long as you prefix this record with an asterisk ( * ), the import will override the default record layout and assume the file contains only those columns included in the header row. If the header record is not the first non-comment record in the file, it will be ignored. If the first non-comment record is determined to be a header record, any unrecognized column header values will result in the entire file not being imported.
Column header values are not case sensitive; you can use upper, lower or mixed case.
Valid values to indicate the first record is a header include:
* - asterisk
"* - double quote asterisk
Column header values can optionally be enclosed in double quotes, like other fields, but are not required to be.
You can include comments within your import files by prefixing the row with a pound sign (#). When any row begins with a # character, the entire row is treated as a comment and no attempt is made to import the data on that row.
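Putting the comment and header conventions together, a small (hypothetical) import file might look like the following; the first row is ignored as a comment, the second row is treated as the column header record, and only the remaining rows are imported:

# Weekly load from the HR system
*username,first_name,last_name
jdoe,Jane,Doe
bsmith,Bob,Smith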
A number of the imports support the use of special tags (e.g., !UPDATE!, !RENAME!, !BLANK!, !DELETE!, !APPEND!). The help pages identify which fields support the usage of particular tags. These tags are not case sensitive (you can use upper, lower, or mixed case); however, they must be enclosed between the exclamation points and must be the first item within a particular field.
If the data included in a particular field contains a comma character, you will need to enclose the entire field in a set of double quotes, as the comma is ordinarily interpreted as a field delimiter (when working with .csv files).
For example, if you were attempting to import a task having the following task name:
MyCo, Inc.
You would include the following in this field (when editing the data via Excel):
MyCo, Inc.
Do note that when you save data using Excel in .csv format, Excel will automatically add the enclosing double quotes around any field containing a comma; thus, viewing the resulting .csv file in a text editor, you would see:
"MyCo, Inc."
As such, you do not need to add the double quotes manually if you are editing the file using Excel. If, however, you are generating the file via an external process or manually creating the file using a text editor, you would need to supply the enclosing double quotes yourself.
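Most CSV libraries handle this quoting for you as well. For instance, a quick sketch using Python's standard csv module (the file name and field values are just examples):

import csv

# The csv writer, like Excel, automatically encloses any field containing a
# comma in double quotes when it writes the row.
with open("tasks.csv", "w", newline="") as f:
    csv.writer(f).writerow(["MyCo, Inc.", "Task One"])
# The resulting line in tasks.csv reads:  "MyCo, Inc.",Task One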
If the data included in a particular field contains a double quote character, you will need to take additional steps, as the double quote is ordinarily interpreted as a field enclosure character (when working with .csv files). You will need to take the following measures so that the import program understands that the embedded double quote is not an enclosure character but instead just part of the data being imported for a given field:
For example, if you were attempting to import a task having the following task name:
Task "One" Comes First
You would include the following in this field (when editing the data via Excel):
"Task ""One"" Comes First"
Do note that when you save data using Excel in .csv format, Excel will add additional double quotes around any single (or contiguous string of) double quotes; thus, viewing the resulting .csv file in a text editor, you would see:
"""Task """"One"""" Comes First"""
For each of the available imports listed above, the Unanet product comes with a built-in export template that matches the import layout. With these export templates available, it is possible to effect mass data changes to your system by exporting the data you wish to change, modifying the resulting text file, and then re-importing the file.
As an alternative to using the import screens, you can also invoke imports programmatically using the command line import utility. This functionality allows you to invoke each import without accessing the Unanet application via the user interface. This may be useful if you would like to create your own process external to Unanet that could extract data from an upstream system and programmatically load it into Unanet (without manual intervention) -- e.g., a nightly load.
When regularly importing a number of files into Unanet, you may be interested in using the Integration Management Utility, which is a stand-alone utility that helps manage the various steps involved in an inbound interface to Unanet. This utility can be configured to kick off the generation of data from an upstream system, the steps necessary to invoke one or many Unanet imports, error parsing and associated reporting via email, file archival, etc.
If you use the Unanet imports on a regular basis to keep your Unanet data in sync with an external system, and do not have the ability to export only new transactions from the external system, you might be feeding Unanet a complete reload of data with each run. In this case, a utility such as the ImportDiff Utility may be useful, for example, to compare the contents of yesterday's load file with today's load file in order to identify only the new or changed records -- a particularly useful step if you have very large data volumes. Check out the ImportDiff Utility help page for more information.
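As a rough, conceptual illustration of what such a comparison involves (this is not the ImportDiff Utility itself; the file names and key column are hypothetical), the idea is simply to keep only the rows that are new or different since the previous run:

import csv

def changed_rows(yesterday_file, today_file, key_field):
    # Index yesterday's rows by their key, then keep only today's rows that
    # are new or have changed since yesterday.
    with open(yesterday_file, newline="") as f:
        previous = {row[key_field]: row for row in csv.DictReader(f)}
    with open(today_file, newline="") as f:
        return [row for row in csv.DictReader(f) if row != previous.get(row[key_field])]

delta = changed_rows("people_yesterday.csv", "people_today.csv", "username")
# Only the rows in delta would then need to be written out and imported.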
If you are planning on running very large import files (which can take a while to process), you may want to make sure you have your system parameters tuned appropriately (to improve run time performance and decrease system timeouts). Check out the Tuning Tips help page for additional information.
Note: While you can run imports while the system is being used, you may want to schedule larger imports for non-peak usage times, and you may want to avoid running imports while you are running system exports (simply to reduce the possibility of database contention).