{"id":54247,"date":"2023-03-29T10:50:23","date_gmt":"2023-03-29T10:50:23","guid":{"rendered":"https:\/\/dbtut.com\/?p=54247"},"modified":"2023-03-29T10:58:17","modified_gmt":"2023-03-29T10:58:17","slug":"import-data","status":"publish","type":"post","link":"https:\/\/dbtut.com\/index.php\/2023\/03\/29\/import-data\/","title":{"rendered":"Import Data"},"content":{"rendered":"<p>In today&#8217;s article, I will cover the Data Import process and the points to pay attention to along the way.<\/p>\n<h3>IMPORT PROCESS<\/h3>\n<p>1. We drop the table we exported from the Scott schema.<\/p>\n<pre class=\"lang:default decode:true \">SQL&gt; drop table scott.emp;\r\n<\/pre>\n<p>2. We start the import process with the impdp tool.<\/p>\n<pre class=\"lang:default decode:true \">-bash-3.2$ impdp emre\/EMRE.123 directory=dump_dir dumpfile=emre.dmp logfile=emre_imp.log\r\n<\/pre>\n<p id=\"RnKFfpu\"><img loading=\"lazy\" decoding=\"async\" width=\"1174\" height=\"610\" class=\"size-full wp-image-54248 aligncenter\" src=\"https:\/\/dbtut.com\/wp-content\/uploads\/2023\/03\/img_64241060e6582.png\" alt=\"\" \/><\/p>\n<h3>Points to Consider in the Import Process<\/h3>\n<p>1. First of all, check whether the target database has enough free space for the import.<\/p>\n<p>2. The owner of the exported table must exist in the database we are importing into. Otherwise, we get an error like the one below.<\/p>\n<pre class=\"lang:default decode:true \">-bash-3.2$ impdp oardahanli\/Brc_Onr2516* directory=DATA_PUMP_DIR dumpfile=web_account.dmp\r\n\r\nImport: Release 11.2.0.4.0 - Production on Fri Jun 19 13:18:23 2015\r\n\r\nCopyright (c) 1982, 2011, Oracle and\/or its affiliates.  
All rights reserved.\r\n\r\nConnected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production\r\nWith the Partitioning, OLAP, Data Mining and Real Application Testing options\r\nMaster table \"OARDAHANLI\".\"SYS_IMPORT_FULL_01\" successfully loaded\/unloaded\r\nStarting \"OARDAHANLI\".\"SYS_IMPORT_FULL_01\":  oardahanli\/******** directory=DATA_PUMP_DIR dumpfile=web_account.dmp\r\nProcessing object type TABLE_EXPORT\/TABLE\/TABLE\r\nORA-39083: Object type TABLE:\"WEBTELMWCORE\".\"ACCOUNT_SSO_NOTIFICATION\" failed to create with error:\r\nORA-01918: user 'WEBTELMWCORE' does not exist\r\nFailing sql is:\r\nCREATE TABLE \"WEBTELMWCORE\".\"ACCOUNT_SSO_NOTIFICATION\" (\"ACC_UID\" NUMBER, \"NOTIF_DATE\" DATE) SEGMENT CREATION IMMEDIATE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_\r\nProcessing object type TABLE_EXPORT\/TABLE\/TABLE_DATA\r\nProcessing object type TABLE_EXPORT\/TABLE\/GRANT\/OWNER_GRANT\/OBJECT_GRANT\r\nORA-39112: Dependent object type OBJECT_GRANT:\"WEBTELMWCORE\" skipped, base object type TABLE:\"WEBTELMWCORE\".\"ACCOUNT_SSO_NOTIFICATION\" creation failed\r\nProcessing object type TABLE_EXPORT\/TABLE\/INDEX\/INDEX\r\nORA-39112: Dependent object type INDEX:\"WEBTELMWCORE\".\"ACCOUNT_SSO_NOTIFICATION_PK\" skipped, base object type TABLE:\"WEBTELMWCORE\".\"ACCOUNT_SSO_NOTIFICATION\" creation failed\r\nProcessing object type TABLE_EXPORT\/TABLE\/INDEX\/STATISTICS\/INDEX_STATISTICS\r\nORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:\"WEBTELMWCORE\".\"ACCOUNT_SSO_NOTIFICATION_PK\" creation failed\r\nProcessing object type TABLE_EXPORT\/TABLE\/STATISTICS\/TABLE_STATISTICS\r\nORA-39112: Dependent object type TABLE_STATISTICS skipped, base object type TABLE:\"WEBTELMWCORE\".\"ACCOUNT_SSO_NOTIFICATION\" creation 
failed\r\nJob \"OARDAHANLI\".\"SYS_IMPORT_FULL_01\" completed with 5 error(s) at Fri Jun 19 13:18:27 2015 elapsed 0 00:00:02\r\n<\/pre>\n<p>The following statements are run to resolve the error.<\/p>\n<pre class=\"lang:default decode:true \">CREATE USER WEBTELMWCORE IDENTIFIED BY \"webtelmwcore\" DEFAULT TABLESPACE USERS TEMPORARY TABLESPACE TEMP;\r\nGRANT CREATE SESSION TO WEBTELMWCORE;\r\nGRANT RESOURCE TO WEBTELMWCORE;\r\n<\/pre>\n<p>3. The tablespace that held the exported objects must also exist in the database we are importing into. If not, we get an error like the one below.<\/p>\n<pre class=\"lang:default decode:true \">-bash-3.2$ impdp oardahanli\/Brc_Onr2516* directory=DATA_PUMP_DIR dumpfile=web_account.dmp\r\n\r\nImport: Release 11.2.0.4.0 - Production on Fri Jun 19 13:23:45 2015\r\n\r\nCopyright (c) 1982, 2011, Oracle and\/or its affiliates.  All rights reserved.\r\n\r\nConnected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production\r\nWith the Partitioning, OLAP, Data Mining and Real Application Testing options\r\nMaster table \"OARDAHANLI\".\"SYS_IMPORT_FULL_01\" successfully loaded\/unloaded\r\nStarting \"OARDAHANLI\".\"SYS_IMPORT_FULL_01\":  oardahanli\/******** directory=DATA_PUMP_DIR dumpfile=web_account.dmp\r\nProcessing object type TABLE_EXPORT\/TABLE\/TABLE\r\nORA-39083: Object type TABLE:\"WEBTELMWCORE\".\"ACCOUNT_SSO_NOTIFICATION\" failed to create with error:\r\nORA-00959: tablespace 'WEBTVNEW' does not exist\r\nFailing sql is:\r\nCREATE TABLE \"WEBTELMWCORE\".\"ACCOUNT_SSO_NOTIFICATION\" (\"ACC_UID\" NUMBER, \"NOTIF_DATE\" DATE) SEGMENT CREATION IMMEDIATE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLAS\r\nProcessing object type TABLE_EXPORT\/TABLE\/TABLE_DATA\r\nProcessing object type TABLE_EXPORT\/TABLE\/GRANT\/OWNER_GRANT\/OBJECT_GRANT\r\nORA-39112: Dependent object type OBJECT_GRANT:\"WEBTELMWCORE\" skipped, base object type TABLE:\"WEBTELMWCORE\".\"ACCOUNT_SSO_NOTIFICATION\" creation failed\r\nProcessing object type TABLE_EXPORT\/TABLE\/INDEX\/INDEX\r\nORA-39112: Dependent object type INDEX:\"WEBTELMWCORE\".\"ACCOUNT_SSO_NOTIFICATION_PK\" skipped, base object type TABLE:\"WEBTELMWCORE\".\"ACCOUNT_SSO_NOTIFICATION\" creation failed\r\nProcessing object type TABLE_EXPORT\/TABLE\/INDEX\/STATISTICS\/INDEX_STATISTICS\r\nORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:\"WEBTELMWCORE\".\"ACCOUNT_SSO_NOTIFICATION_PK\" creation failed\r\nProcessing object type TABLE_EXPORT\/TABLE\/STATISTICS\/TABLE_STATISTICS\r\nORA-39112: Dependent object type TABLE_STATISTICS skipped, base object type TABLE:\"WEBTELMWCORE\".\"ACCOUNT_SSO_NOTIFICATION\" creation failed\r\nJob \"OARDAHANLI\".\"SYS_IMPORT_FULL_01\" completed with 5 error(s) at Fri Jun 19 13:23:48 2015 elapsed 0 00:00:01\r\n<\/pre>\n<p>The following statement is run to resolve the error.<\/p>\n<pre class=\"lang:default decode:true \">CREATE TABLESPACE WEBTVNEW DATAFILE \r\n'\/oracle\/ora11g\/data_ONURDB\/ONURDB\/webtvnew01.dbf' SIZE 1024M AUTOEXTEND ON NEXT 1024M MAXSIZE UNLIMITED\r\nLOGGING\r\nONLINE\r\nPERMANENT\r\nEXTENT MANAGEMENT LOCAL AUTOALLOCATE\r\nBLOCKSIZE 8K\r\nSEGMENT SPACE MANAGEMENT AUTO\r\nFLASHBACK ON;\r\n<\/pre>\n<p>4. We may get GRANT errors during the import. These are usually harmless. For example, we receive errors like the following.<\/p>\n<pre class=\"lang:default decode:true \">-bash-3.2$ impdp oardahanli\/Brc_Onr2516* directory=DATA_PUMP_DIR dumpfile=web_account.dmp\r\n\r\nImport: Release 11.2.0.4.0 - Production on Fri Jun 19 13:33:46 2015\r\n\r\nCopyright (c) 1982, 2011, Oracle and\/or its affiliates.  
All rights reserved.\r\n\r\nConnected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production\r\nWith the Partitioning, OLAP, Data Mining and Real Application Testing options\r\nMaster table \"OARDAHANLI\".\"SYS_IMPORT_FULL_01\" successfully loaded\/unloaded\r\nStarting \"OARDAHANLI\".\"SYS_IMPORT_FULL_01\":  oardahanli\/******** directory=DATA_PUMP_DIR dumpfile=web_account.dmp\r\nProcessing object type TABLE_EXPORT\/TABLE\/TABLE\r\nProcessing object type TABLE_EXPORT\/TABLE\/TABLE_DATA\r\n. . imported \"WEBTELMWCORE\".\"ACCOUNT_SSO_NOTIFICATION\"   2.583 MB  151525 rows\r\nProcessing object type TABLE_EXPORT\/TABLE\/GRANT\/OWNER_GRANT\/OBJECT_GRANT\r\nORA-39083: Object type OBJECT_GRANT failed to create with error:\r\nORA-01917: user or role 'MW_ROLE1' does not exist\r\nFailing sql is:\r\nGRANT SELECT ON \"WEBTELMWCORE\".\"ACCOUNT_SSO_NOTIFICATION\" TO \"MW_ROLE1\"\r\nProcessing object type TABLE_EXPORT\/TABLE\/INDEX\/INDEX\r\nProcessing object type TABLE_EXPORT\/TABLE\/INDEX\/STATISTICS\/INDEX_STATISTICS\r\nProcessing object type TABLE_EXPORT\/TABLE\/STATISTICS\/TABLE_STATISTICS\r\nJob \"OARDAHANLI\".\"SYS_IMPORT_FULL_01\" completed with 1 error(s) at Fri Jun 19 13:33:50 2015 elapsed 0 00:00:03\r\n<\/pre>\n<h3>Completing the Import Process<\/h3>\n<p>1. The import process was interrupted at the following step.<\/p>\n<p id=\"PUOwDMp\"><img loading=\"lazy\" decoding=\"async\" width=\"1582\" height=\"248\" class=\"size-full wp-image-54249 aligncenter\" src=\"https:\/\/dbtut.com\/wp-content\/uploads\/2023\/03\/img_6424124e395c6.png\" alt=\"\" \/><\/p>\n<p>2. Even though the import was stopped, the job still appears as &#8220;EXECUTING&#8221;. We verified this with the following query.<\/p>\n<pre class=\"lang:default decode:true \">SELECT owner_name, job_name, operation, job_mode, state, attached_sessions FROM dba_datapump_jobs \r\nWHERE job_name NOT LIKE 'BIN$%' ORDER BY 1, 2;<\/pre>\n<p>3. 
We try to stop the database.<\/p>\n<pre class=\"lang:default decode:true \">[oracle@mwdb1 ~]$ srvctl status database -d tivibudb\r\nInstance tivibudb1 is running on node mwdb1\r\nInstance tivibudb2 is running on node mwdb2\r\n[oracle@mwdb1 ~]$ srvctl stop database -d tivibudb\r\n<\/pre>\n<p>4. The database did not stop. Checking the alert log, we see that the cause is a full FRA (Fast Recovery Area).<\/p>\n<p id=\"fpXnSlz\"><img loading=\"lazy\" decoding=\"async\" width=\"1230\" height=\"226\" class=\"size-full wp-image-54250 alignnone\" src=\"https:\/\/dbtut.com\/wp-content\/uploads\/2023\/03\/img_642413b71a660.png\" alt=\"\" \/><\/p>\n<p><strong>NOTE:<\/strong> When the FRA is full, the database cannot write the current redo to archive files, so it cannot guarantee the consistency of the transactions in flight and therefore cannot shut down cleanly. The checkpoint information in the controlfile, datafiles, and archived logs must match for the database to open consistently.<\/p>\n<p>5. The SMON process of the instance is killed to force the database to shut down.<\/p>\n<pre class=\"lang:default decode:true \">\t[oracle@mwdb2 ~]$ ps -ef | grep smon\r\n\troot      4911     1  0  2015 ?        00:26:09 \/u01\/app\/grid\/product\/11.2.0\/grid\/bin\/osysmond.bin\r\n\toracle    5363     1  0  2015 ?        00:00:00 asm_smon_+ASM2\r\n\toracle   12482     1  0 16:36 ?        
00:00:00 ora_smon_tivibudb2\r\n\toracle   21036 12330  0 17:11 pts\/1    00:00:00 grep smon\r\n\t\r\n\t[oracle@mwdb2 ~]$ kill -9 12482\r\n<\/pre>\n<p id=\"fhnBBUd\"><img loading=\"lazy\" decoding=\"async\" width=\"486\" height=\"134\" class=\"size-full wp-image-54251 aligncenter\" src=\"https:\/\/dbtut.com\/wp-content\/uploads\/2023\/03\/img_6424149abe141.png\" alt=\"\" \/><\/p>\n<p id=\"XniPmBU\"><img loading=\"lazy\" decoding=\"async\" width=\"1260\" height=\"134\" class=\"size-full wp-image-54252 aligncenter\" src=\"https:\/\/dbtut.com\/wp-content\/uploads\/2023\/03\/img_642414a366a92.png\" alt=\"\" \/><\/p>\n<p>6. The database is closed. When we connect with &#8220;sqlplus \/ as sysdba&#8221;, we see that the instance is idle.<\/p>\n<p id=\"DExZnys\"><img loading=\"lazy\" decoding=\"async\" width=\"598\" height=\"196\" class=\"size-full wp-image-54253 aligncenter\" src=\"https:\/\/dbtut.com\/wp-content\/uploads\/2023\/03\/img_642414cf76e34.png\" alt=\"\" \/><\/p>\n<p>7. The database is started in NOMOUNT mode. [On both nodes]<\/p>\n<pre class=\"lang:default decode:true \">startup nomount;\r\n<\/pre>\n<p id=\"QUAOFCs\"><img loading=\"lazy\" decoding=\"async\" width=\"380\" height=\"180\" class=\"size-full wp-image-54255 aligncenter\" src=\"https:\/\/dbtut.com\/wp-content\/uploads\/2023\/03\/img_64241507c6a38.png\" alt=\"\" \/><\/p>\n<p>8. The old archived logs are deleted with RMAN.<\/p>\n<pre class=\"lang:default decode:true \">\trman target \/\r\n\t\r\n\tdelete noprompt archivelog all completed before 'sysdate-2';\r\n<\/pre>\n<p>9. A crosscheck is performed.<\/p>\n<pre class=\"lang:default decode:true \">\tRMAN&gt; crosscheck archivelog all;\r\n\t\r\n\treleased channel: ORA_DISK_1\r\n\tallocated channel: ORA_DISK_1\r\n\tchannel ORA_DISK_1: SID=1003 instance=tivibudb1 device type=DISK\r\n\t...\r\nCrosschecked 3371 objects<\/pre>\n<p><strong>NOTE:<\/strong> The crosscheck command checks whether the backups recorded in the RMAN repository physically exist on disk or tape. 
If the file does not exist physically on the disk or tape (it may have been deleted through the operating system), RMAN marks the relevant backup as &#8220;EXPIRED&#8221; in the repository. Afterwards, backup information can be deleted from the RMAN repository with the &#8220;DELETE EXPIRED&#8221; command.<\/p>\n<p>10. We check whether there is enough free space.<\/p>\n<pre class=\"lang:default decode:true \">select name, free_mb, total_mb from v$asm_diskgroup;\r\n<\/pre>\n<p id=\"pwqGkWs\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"138\" class=\"size-full wp-image-54256 aligncenter\" src=\"https:\/\/dbtut.com\/wp-content\/uploads\/2023\/03\/img_642415a24d448.png\" alt=\"\" \/><\/p>\n<p>If there is not enough space, we look inside the FRA and check what can be deleted.<\/p>\n<p>We connect to ASM with &#8220;asmcmd&#8221; to see what can be deleted from FRA with RMAN.<\/p>\n<pre class=\"lang:default decode:true \">\t[root@mwdb1 ~]# su - grid\r\n\tsu: user grid does not exist [Since there is no grid user, we could not switch to it. Instead, we switch to the grid environment as the \"oracle\" user and connect to ASM. 
]\r\n\t\r\n\t[root@mwdb1 ~]# su - oracle\r\n\t[oracle@mwdb1 ~]$ grid_env\r\n\t[oracle@mwdb1 ~]$ asmcmd\r\n\tASMCMD&gt; ls\r\n\tDATA\/\r\n\tFRA\/\r\n\tASMCMD&gt; cd fra\r\n\tASMCMD&gt; ls\r\n\tTIVIBUDB\/\r\n\tASMCMD&gt; cd tivibudb\r\n\tASMCMD&gt; ls\r\n\tARCHIVELOG\/\r\n\tCONTROLFILE\/\r\n\tONLINELOG\/\r\n\tASMCMD&gt; cd ARCHIVELOG\r\n\tASMCMD&gt; ls\r\n\t2016_01_01\/\r\n\t2016_01_02\/\r\n\t2016_01_03\/\r\n\t2016_01_04\/\r\n\t2016_01_05\/\r\n\t2016_01_06\/\r\n\t2016_01_07\/\r\n\tASMCMD&gt; cd ..\r\n\tASMCMD&gt; ls\r\n\tARCHIVELOG\/\r\n\tCONTROLFILE\/\r\n\tONLINELOG\/\r\n\tASMCMD&gt; cd ONLINELOG\r\n\tASMCMD&gt; ls\r\n\tgroup_1.257.843241075\r\n\tgroup_2.258.843241075\r\n\tgroup_3.259.843241107\r\n\tgroup_4.260.843241107\r\n\tgroup_5.264.892999317\r\n\tgroup_6.265.892999317\r\n\tgroup_7.266.892999319\r\n\tgroup_8.267.892999319\r\n\tASMCMD&gt; \r\n<\/pre>\n<p id=\"OPIewlb\"><img loading=\"lazy\" decoding=\"async\" width=\"580\" height=\"110\" class=\"size-full wp-image-54257 aligncenter\" src=\"https:\/\/dbtut.com\/wp-content\/uploads\/2023\/03\/img_642415e019b8f.png\" alt=\"\" \/><\/p>\n<p>11. 
We check whether the space usage reported by v$asm_diskgroup and v$recovery_file_dest is consistent.<\/p>\n<pre class=\"lang:default decode:true \">select name,(space_limit\/1024\/1024) \"Size MB\",(space_used\/1024\/1024) \"Used MB\" from v$recovery_file_dest \r\norder by name;<\/pre>\n<p id=\"ZvnGWIG\"><img loading=\"lazy\" decoding=\"async\" width=\"264\" height=\"130\" class=\"size-full wp-image-54259 aligncenter\" src=\"https:\/\/dbtut.com\/wp-content\/uploads\/2023\/03\/img_6424163d9c2e1.png\" alt=\"\" \/><\/p>\n<pre class=\"lang:default decode:true \">\t[root@mwdb1 ~]# su - oracle\r\n\t[oracle@mwdb1 ~]$ grid_env\r\n\t[oracle@mwdb1 ~]$ asmcmd\r\n\tASMCMD&gt; ls -s\r\n\tSector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name\r\n\t   512   4096  1048576    255996   101222                0          101222              0             Y  DATA\/\r\n\t   512   4096  1048576    153597   114655                0          114655              0             N  FRA\/\r\nASMCMD&gt;<\/pre>\n<p>12. If the used space is not the same, the expired archived logs are deleted.<\/p>\n<pre class=\"lang:default decode:true \">\tRMAN&gt; delete expired archivelog all;\r\n\t\r\n\treleased channel: ORA_DISK_1\r\n\tallocated channel: ORA_DISK_1\r\n\tchannel ORA_DISK_1: SID=1003 instance=tivibudb1 device type=DISK\r\n\t...\r\n\tarchived log file name=+FRA\/tivibudb\/archivelog\/2015_12_31\/thread_2_seq_6773.2636.899936537 RECID=2909 STAMP=899936538\r\n\tdeleted archived log\r\n\tarchived log file name=+FRA\/tivibudb\/archivelog\/2015_12_31\/thread_2_seq_6774.2637.899940475 RECID=2910 STAMP=899940474\r\n\tDeleted 2364 EXPIRED objects\r\n\t\r\n\tORA-00245: control file backup failed; target is likely on a local file system\r\n\t\r\n\tRMAN-08132: WARNING: cannot update recovery area reclaimable file list\r\n<\/pre>\n<p>13. We mount the database. 
[On both nodes]<\/p>\n<pre class=\"lang:default decode:true \">alter database mount;\r\n<\/pre>\n<p id=\"asdMfJQ\"><img loading=\"lazy\" decoding=\"async\" width=\"246\" height=\"86\" class=\"size-full wp-image-54260 aligncenter\" src=\"https:\/\/dbtut.com\/wp-content\/uploads\/2023\/03\/img_642416a0b813d.png\" alt=\"\" \/><\/p>\n<p>14. The database is opened. [On both nodes]<\/p>\n<pre class=\"lang:default decode:true \">alter database open;\r\n<\/pre>\n<p id=\"DJGaOXw\"><img loading=\"lazy\" decoding=\"async\" width=\"232\" height=\"78\" class=\"size-full wp-image-54261 aligncenter\" src=\"https:\/\/dbtut.com\/wp-content\/uploads\/2023\/03\/img_642416c131404.png\" alt=\"\" \/><\/p>\n","protected":false},"excerpt":{"rendered":"<p>In today&#8217;s article, I will cover the Data Import process and the points to pay attention to along the way. IMPORT PROCESS 1. We drop the table we exported from the Scott schema. SQL&gt; drop table scott.emp; 2. We start the import process with the impdp tool. -bash-3.2$ impdp emre\/EMRE.123 directory=dump_dir dumpfile=emre.dmp logfile=emre_imp.log &hellip;<\/p>\n<div 
class=\"pvc_clear\"><\/div>\n","protected":false},"author":484,"featured_media":54262,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"om_disable_all_campaigns":false,"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"_uf_show_specific_survey":0,"_uf_disable_surveys":false,"footnotes":""},"categories":[4],"tags":[],"class_list":["post-54247","post","type-post","status-publish","format-standard","has-post-thumbnail","","category-oracle"],"aioseo_notices":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v24.9 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Import Data - Database Tutorials<\/title>\n<meta name=\"description\" content=\"In today&#039;s article, I will cover you about the Data Import process and the parts to pay attention to in this process.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dbtut.com\/index.php\/2023\/03\/29\/import-data\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Import Data - Database Tutorials\" \/>\n<meta property=\"og:description\" content=\"In today&#039;s article, I will cover you about the Data Import process and the parts to pay attention to in this process.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dbtut.com\/index.php\/2023\/03\/29\/import-data\/\" \/>\n<meta property=\"og:site_name\" content=\"Database Tutorials\" \/>\n<meta property=\"article:published_time\" content=\"2023-03-29T10:50:23+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-03-29T10:58:17+00:00\" \/>\n<meta property=\"og:image\" 
content=\"https:\/\/dbtut.com\/wp-content\/uploads\/2023\/03\/Ekran-goruntusu-2023-03-29-134924.png\" \/>\n\t<meta property=\"og:image:width\" content=\"712\" \/>\n\t<meta property=\"og:image:height\" content=\"311\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Onur ARDAHANLI\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Onur ARDAHANLI\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"9 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/dbtut.com\/index.php\/2023\/03\/29\/import-data\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/dbtut.com\/index.php\/2023\/03\/29\/import-data\/\"},\"author\":{\"name\":\"Onur ARDAHANLI\",\"@id\":\"https:\/\/dbtut.com\/#\/schema\/person\/7fcd466cd0d347ec64aaa48f18f780c6\"},\"headline\":\"Import 
Data\",\"datePublished\":\"2023-03-29T10:50:23+00:00\",\"dateModified\":\"2023-03-29T10:58:17+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/dbtut.com\/index.php\/2023\/03\/29\/import-data\/\"},\"wordCount\":471,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/dbtut.com\/#organization\"},\"image\":{\"@id\":\"https:\/\/dbtut.com\/index.php\/2023\/03\/29\/import-data\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/dbtut.com\/wp-content\/uploads\/2023\/03\/Ekran-goruntusu-2023-03-29-134924.png\",\"articleSection\":[\"ORACLE\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/dbtut.com\/index.php\/2023\/03\/29\/import-data\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/dbtut.com\/index.php\/2023\/03\/29\/import-data\/\",\"url\":\"https:\/\/dbtut.com\/index.php\/2023\/03\/29\/import-data\/\",\"name\":\"Import Data - Database Tutorials\",\"isPartOf\":{\"@id\":\"https:\/\/dbtut.com\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/dbtut.com\/index.php\/2023\/03\/29\/import-data\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/dbtut.com\/index.php\/2023\/03\/29\/import-data\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/dbtut.com\/wp-content\/uploads\/2023\/03\/Ekran-goruntusu-2023-03-29-134924.png\",\"datePublished\":\"2023-03-29T10:50:23+00:00\",\"dateModified\":\"2023-03-29T10:58:17+00:00\",\"description\":\"In today's article, I will cover you about the Data Import process and the parts to pay attention to in this 
process.\",\"breadcrumb\":{\"@id\":\"https:\/\/dbtut.com\/index.php\/2023\/03\/29\/import-data\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/dbtut.com\/index.php\/2023\/03\/29\/import-data\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/dbtut.com\/index.php\/2023\/03\/29\/import-data\/#primaryimage\",\"url\":\"https:\/\/dbtut.com\/wp-content\/uploads\/2023\/03\/Ekran-goruntusu-2023-03-29-134924.png\",\"contentUrl\":\"https:\/\/dbtut.com\/wp-content\/uploads\/2023\/03\/Ekran-goruntusu-2023-03-29-134924.png\",\"width\":712,\"height\":311},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/dbtut.com\/index.php\/2023\/03\/29\/import-data\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/dbtut.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Import Data\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/dbtut.com\/#website\",\"url\":\"https:\/\/dbtut.com\/\",\"name\":\"Database Tutorials\",\"description\":\"MSSQL, Oracle, PostgreSQL, MySQL, MariaDB, DB2, Sybase, Teradata, Big Data, NOSQL, MongoDB, Couchbase, Cassandra, Windows, 
Linux\",\"publisher\":{\"@id\":\"https:\/\/dbtut.com\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/dbtut.com\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/dbtut.com\/#organization\",\"name\":\"dbtut\",\"url\":\"https:\/\/dbtut.com\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/dbtut.com\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/dbtut.com\/wp-content\/uploads\/2021\/02\/dbtutlogo.jpg\",\"contentUrl\":\"https:\/\/dbtut.com\/wp-content\/uploads\/2021\/02\/dbtutlogo.jpg\",\"width\":223,\"height\":36,\"caption\":\"dbtut\"},\"image\":{\"@id\":\"https:\/\/dbtut.com\/#\/schema\/logo\/image\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\/\/dbtut.com\/#\/schema\/person\/7fcd466cd0d347ec64aaa48f18f780c6\",\"name\":\"Onur ARDAHANLI\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/dbtut.com\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/ecd20c3e1374ced4e1aefc82101cce4cd437be8fd957d1be3d106668b8a1b990?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/ecd20c3e1374ced4e1aefc82101cce4cd437be8fd957d1be3d106668b8a1b990?s=96&d=mm&r=g\",\"caption\":\"Onur ARDAHANLI\"},\"url\":\"https:\/\/dbtut.com\/index.php\/author\/onurardahanli\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Import Data - Database Tutorials","description":"In today's article, I will cover you about the Data Import process and the parts to pay attention to in this process.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dbtut.com\/index.php\/2023\/03\/29\/import-data\/","og_locale":"en_US","og_type":"article","og_title":"Import Data - Database Tutorials","og_description":"In today's article, I will cover you about the Data Import process and the parts to pay attention to in this process.","og_url":"https:\/\/dbtut.com\/index.php\/2023\/03\/29\/import-data\/","og_site_name":"Database Tutorials","article_published_time":"2023-03-29T10:50:23+00:00","article_modified_time":"2023-03-29T10:58:17+00:00","og_image":[{"width":712,"height":311,"url":"https:\/\/dbtut.com\/wp-content\/uploads\/2023\/03\/Ekran-goruntusu-2023-03-29-134924.png","type":"image\/png"}],"author":"Onur ARDAHANLI","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Onur ARDAHANLI","Est. 
reading time":"9 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/dbtut.com\/index.php\/2023\/03\/29\/import-data\/#article","isPartOf":{"@id":"https:\/\/dbtut.com\/index.php\/2023\/03\/29\/import-data\/"},"author":{"name":"Onur ARDAHANLI","@id":"https:\/\/dbtut.com\/#\/schema\/person\/7fcd466cd0d347ec64aaa48f18f780c6"},"headline":"Import Data","datePublished":"2023-03-29T10:50:23+00:00","dateModified":"2023-03-29T10:58:17+00:00","mainEntityOfPage":{"@id":"https:\/\/dbtut.com\/index.php\/2023\/03\/29\/import-data\/"},"wordCount":471,"commentCount":0,"publisher":{"@id":"https:\/\/dbtut.com\/#organization"},"image":{"@id":"https:\/\/dbtut.com\/index.php\/2023\/03\/29\/import-data\/#primaryimage"},"thumbnailUrl":"https:\/\/dbtut.com\/wp-content\/uploads\/2023\/03\/Ekran-goruntusu-2023-03-29-134924.png","articleSection":["ORACLE"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/dbtut.com\/index.php\/2023\/03\/29\/import-data\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/dbtut.com\/index.php\/2023\/03\/29\/import-data\/","url":"https:\/\/dbtut.com\/index.php\/2023\/03\/29\/import-data\/","name":"Import Data - Database Tutorials","isPartOf":{"@id":"https:\/\/dbtut.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dbtut.com\/index.php\/2023\/03\/29\/import-data\/#primaryimage"},"image":{"@id":"https:\/\/dbtut.com\/index.php\/2023\/03\/29\/import-data\/#primaryimage"},"thumbnailUrl":"https:\/\/dbtut.com\/wp-content\/uploads\/2023\/03\/Ekran-goruntusu-2023-03-29-134924.png","datePublished":"2023-03-29T10:50:23+00:00","dateModified":"2023-03-29T10:58:17+00:00","description":"In today's article, I will cover you about the Data Import process and the parts to pay attention to in this 
process.","breadcrumb":{"@id":"https:\/\/dbtut.com\/index.php\/2023\/03\/29\/import-data\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dbtut.com\/index.php\/2023\/03\/29\/import-data\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/dbtut.com\/index.php\/2023\/03\/29\/import-data\/#primaryimage","url":"https:\/\/dbtut.com\/wp-content\/uploads\/2023\/03\/Ekran-goruntusu-2023-03-29-134924.png","contentUrl":"https:\/\/dbtut.com\/wp-content\/uploads\/2023\/03\/Ekran-goruntusu-2023-03-29-134924.png","width":712,"height":311},{"@type":"BreadcrumbList","@id":"https:\/\/dbtut.com\/index.php\/2023\/03\/29\/import-data\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dbtut.com\/"},{"@type":"ListItem","position":2,"name":"Import Data"}]},{"@type":"WebSite","@id":"https:\/\/dbtut.com\/#website","url":"https:\/\/dbtut.com\/","name":"Database Tutorials","description":"MSSQL, Oracle, PostgreSQL, MySQL, MariaDB, DB2, Sybase, Teradata, Big Data, NOSQL, MongoDB, Couchbase, Cassandra, Windows, 
Linux","publisher":{"@id":"https:\/\/dbtut.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dbtut.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/dbtut.com\/#organization","name":"dbtut","url":"https:\/\/dbtut.com\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/dbtut.com\/#\/schema\/logo\/image\/","url":"https:\/\/dbtut.com\/wp-content\/uploads\/2021\/02\/dbtutlogo.jpg","contentUrl":"https:\/\/dbtut.com\/wp-content\/uploads\/2021\/02\/dbtutlogo.jpg","width":223,"height":36,"caption":"dbtut"},"image":{"@id":"https:\/\/dbtut.com\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/dbtut.com\/#\/schema\/person\/7fcd466cd0d347ec64aaa48f18f780c6","name":"Onur ARDAHANLI","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/dbtut.com\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/ecd20c3e1374ced4e1aefc82101cce4cd437be8fd957d1be3d106668b8a1b990?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/ecd20c3e1374ced4e1aefc82101cce4cd437be8fd957d1be3d106668b8a1b990?s=96&d=mm&r=g","caption":"Onur 
ARDAHANLI"},"url":"https:\/\/dbtut.com\/index.php\/author\/onurardahanli\/"}]}},"amp_enabled":true,"_links":{"self":[{"href":"https:\/\/dbtut.com\/index.php\/wp-json\/wp\/v2\/posts\/54247","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dbtut.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dbtut.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dbtut.com\/index.php\/wp-json\/wp\/v2\/users\/484"}],"replies":[{"embeddable":true,"href":"https:\/\/dbtut.com\/index.php\/wp-json\/wp\/v2\/comments?post=54247"}],"version-history":[{"count":1,"href":"https:\/\/dbtut.com\/index.php\/wp-json\/wp\/v2\/posts\/54247\/revisions"}],"predecessor-version":[{"id":54263,"href":"https:\/\/dbtut.com\/index.php\/wp-json\/wp\/v2\/posts\/54247\/revisions\/54263"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dbtut.com\/index.php\/wp-json\/wp\/v2\/media\/54262"}],"wp:attachment":[{"href":"https:\/\/dbtut.com\/index.php\/wp-json\/wp\/v2\/media?parent=54247"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dbtut.com\/index.php\/wp-json\/wp\/v2\/categories?post=54247"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dbtut.com\/index.php\/wp-json\/wp\/v2\/tags?post=54247"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}