Microsoft SQL Server (MS SQL) is user-friendly and has a simple interface that is easy to understand, which makes it one of the most widely used database management systems (DBMS) worldwide. However, the product has two significant drawbacks that sometimes force users to look for an alternative DBMS:
- strict licensing policies
- high cost of ownership for licenses covering large databases
Evaluating open-source databases is a common way to reduce the cost of ownership. There are three main open-source databases to consider:
SQLite is a file-based, self-contained DBMS that is ideal for small databases but cannot be used in a multi-user environment. MySQL is much more powerful and offers enough capabilities for building complex, high-load databases. However, it has poor support for full-text search and does not fully comply with the SQL standard.
The main advantages of PostgreSQL are its compliance with RDBMS standards and its built-in object-oriented database functionality, which make it the best option when data integrity and a high level of reliability are required.
Migrating a database from MS SQL to PostgreSQL includes the following steps:
- export MS SQL table definitions in the form of data definition language (DDL) statements
- convert these statements according to the target format
- import the result into the PostgreSQL database
- export SQL Server data into comma-separated values (CSV) files as intermediate storage
- transform the data according to the PostgreSQL format, if necessary
- import the result into the target database
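The data-transformation step of the plan above can be sketched as a small script. A minimal illustration in Python, assuming (purely as an example) that the exported CSV contains dates in regional MM/DD/YYYY formatting that PostgreSQL would misread; the column indexes are placeholders:

```python
import csv
import io
from datetime import datetime

# Hypothetical transform step: rewrite MM/DD/YYYY dates in the chosen
# CSV columns into the ISO 8601 format that PostgreSQL accepts unambiguously.
def normalize_csv(text: str, date_columns: set) -> str:
    out = io.StringIO()
    writer = csv.writer(out, lineterminator="\n")
    for row in csv.reader(io.StringIO(text)):
        for i in date_columns:
            if row[i]:
                row[i] = datetime.strptime(row[i], "%m/%d/%Y").strftime("%Y-%m-%d")
        writer.writerow(row)
    return out.getvalue()
```

Any real export will need its own column map and formats; the point is that the intermediate CSV can be rewritten freely before it reaches PostgreSQL.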
How to Extract SQL Server Table Definitions:
- for SQL Server 2008 and earlier versions, highlight the database in Management Studio and right-click on it, then click the “Tasks” menu item and the “Generate Scripts” submenu. Verify that the “Script Data” option is set to False in the wizard window that appears
- for MS SQL 2012 and later versions, highlight the database in Management Studio and right-click on it, then click the “Tasks” menu item and the “Generate Scripts” submenu. Next, click the “Advanced” button on the “Set Scripting Options” page and select the “Schema only” value for “Types of data to script” in the General section.
Update the resulting script according to PostgreSQL syntax before proceeding to the next step. The following updates must be applied to all SQL statements in the script:
- MS SQL specific statements (e.g. “SET ANSI_NULLS ON”, “SET QUOTED_IDENTIFIER ON”, “SET ANSI_PADDING ON”) must be removed
- square brackets around database object names must be replaced by double quotes
- square brackets enclosing types in schema declarations must be removed
- SQL Server default schema “dbo” must be replaced by PostgreSQL default schema “public”
- all MS SQL optional keywords that are not supported by PostgreSQL (e.g. “WITH NOCHECK”, “CLUSTERED”) must be removed
- convert all unsupported data types (for example, “DATETIME” becomes “TIMESTAMP”, “MONEY” becomes “NUMERIC(19,4)”, “INT IDENTITY(…)” becomes “SERIAL”)
- remove all references to file groups (e.g. “ON PRIMARY”)
- the SQL Server batch terminator “GO” must be replaced with the PostgreSQL statement terminator “;”
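The rewrite rules above can be approximated with a short script. A rough Python sketch, assuming the DDL was generated by SSMS with default settings; the regex list is illustrative rather than a complete T-SQL parser, so its output should still be reviewed by hand:

```python
import re

# Illustrative (not exhaustive) rewrite rules from the checklist above.
def mssql_ddl_to_postgres(script: str) -> str:
    rules = [
        # drop MS SQL specific session settings
        (r"(?mi)^SET\s+(ANSI_NULLS|QUOTED_IDENTIFIER|ANSI_PADDING)\s+ON\s*$", ""),
        # batch terminator GO -> statement terminator ";"
        (r"(?mi)^GO\s*$", ";"),
        # filegroup references
        (r"(?i)\s+ON\s+\[?PRIMARY\]?", ""),
        # optional keywords PostgreSQL does not support
        (r"(?i)\bWITH\s+NO\s?CHECK\b|\bCLUSTERED\b", ""),
        # brackets around type names are removed, not quoted
        (r"(?i)\[(int|bigint|smallint|varchar|nvarchar|datetime|money|bit|text)\]", r"\1"),
        # type mapping
        (r"(?i)\bDATETIME\b", "TIMESTAMP"),
        (r"(?i)\bMONEY\b", "NUMERIC(19,4)"),
        (r"(?i)\bINT\s+IDENTITY\s*\(\s*\d+\s*,\s*\d+\s*\)", "SERIAL"),
        # default schema dbo -> public
        (r"(?i)\[?dbo\]?\.", "public."),
        # [name] -> "name" for remaining object identifiers
        (r"\[([^\]]+)\]", r'"\1"'),
    ]
    for pattern, replacement in rules:
        script = re.sub(pattern, replacement, script)
    return script
```

For example, `CREATE TABLE [dbo].[emp] ([id] [int] IDENTITY(1,1), [salary] [money]) ON [PRIMARY]` followed by `GO` becomes a `CREATE TABLE public."emp" (…)` statement using SERIAL and NUMERIC(19,4), terminated by “;”.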
The next phase is to export the data. It can be handled in SQL Server Management Studio as follows:
- highlight the database and right-click on it, then click the “Tasks” menu item and the “Export Data” submenu
- browse through the wizard and choose “Microsoft OLE DB Provider for SQL Server” as the data source and “Flat File Destination” as the destination.
After the export is done, the data can be found in the destination file in comma-separated values (CSV) format.
A workaround must be applied to the MS SQL to PostgreSQL migration if some of the tables contain binary data. To do this, go to the wizard page titled “Specify Table Copy or Query” and select the “Write a query to specify the data to transfer” option. On the next wizard page, “Provide a Source Query”, compose the following SELECT query:
select non-binary-field#1, non-binary-field#2, cast(master.sys.fn_varbintohexstr(
cast(binary-field-name as varbinary(max))) as varchar(max)) as binary-field-name
from table-name
The query above may hang indefinitely on large binary data, which makes this technique unusable for values of 1 MB and above.
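Before such exported values can be loaded into a PostgreSQL bytea column, the “0x…” hex string produced by fn_varbintohexstr has to be rewritten into PostgreSQL's bytea hex input format (“\x…”). A hypothetical helper sketching that conversion:

```python
# Hypothetical helper: rewrite the "0x..." hex string produced by
# fn_varbintohexstr into PostgreSQL's bytea hex input format ("\x...").
def mssql_hex_to_bytea(value: str) -> str:
    if not value.startswith("0x"):
        raise ValueError("expected a 0x-prefixed hex string")
    hex_digits = value[2:]
    bytes.fromhex(hex_digits)  # raises ValueError on invalid hex digits
    return "\\x" + hex_digits.lower()
```

Applied to each binary column of the CSV, this yields values that PostgreSQL's COPY accepts directly into a bytea field.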
The most efficient approach to loading a CSV file into a PostgreSQL table is:
- use the “COPY” command as follows: COPY table-name FROM 'path-to-csv-file' DELIMITER ',' CSV;
- try the “\copy” command if you get a “Permission denied” error message with the “COPY” command.
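The same client-side behavior as “\copy” can be reproduced programmatically. A sketch assuming the psycopg2 driver: copy_expert streams the file from the client machine over the connection, so the server process never needs filesystem access to the CSV, which is the usual cause of the “Permission denied” error with a plain COPY:

```python
# Sketch of a client-side CSV load, assuming psycopg2 is installed and a
# connection object is supplied by the caller (connection details omitted).
def build_copy_sql(table: str) -> str:
    # COPY ... FROM STDIN reads the data from the client connection,
    # not from a file on the server
    return f"COPY {table} FROM STDIN WITH (FORMAT csv, DELIMITER ',')"

def load_csv(conn, table: str, path: str) -> None:
    with conn.cursor() as cur, open(path, "r", encoding="utf-8") as f:
        cur.copy_expert(build_copy_sql(table), f)
    conn.commit()
```

This is the programmatic equivalent of running “\copy” in psql: the file path is resolved on the client, never on the server.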
The multi-step procedure explored above proves that MS SQL to PostgreSQL database migration is a sophisticated process requiring a lot of time and effort. There are tools available to convert and migrate data between the two DBMS in a few clicks, and MS SQL-to-PostgreSQL is one of them. It was built by Intelligent Converters, a software vendor working on database conversion and synchronization algorithms since 2001.
The application offers high performance of database migration due to a direct connection to both the MS SQL and PostgreSQL databases, without ODBC drivers or any other middleware components. It also provides command-line support for scripting, automating, and scheduling the conversion process.