{"id":2264,"date":"2018-08-27T13:19:40","date_gmt":"2018-08-27T13:19:40","guid":{"rendered":"http:\/\/dbtut.com\/?p=2264"},"modified":"2018-11-18T21:05:25","modified_gmt":"2018-11-18T21:05:25","slug":"teradata-query-grid-connection-with-different-systems-database-nosql-hadoop","status":"publish","type":"post","link":"https:\/\/dbtut.com\/index.php\/2018\/08\/27\/teradata-query-grid-connection-with-different-systems-database-nosql-hadoop\/","title":{"rendered":"Teradata Query Grid : Connection with different systems ( Database , NoSQL , Hadoop )"},"content":{"rendered":"<p>&nbsp;<\/p>\n<p>Teradata 15.0 has come up with many new exciting features and enhanced capabilities. Teradata Query Grid is one of them.<\/p>\n<figure id=\"attachment_2266\" aria-describedby=\"caption-attachment-2266\" style=\"width: 320px\" class=\"wp-caption alignnone\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-2266 size-full\" src=\"http:\/\/dbtut.com\/wp-content\/uploads\/2018\/08\/blog3.jpg.png\" alt=\"\" width=\"320\" height=\"206\" \/><figcaption id=\"caption-attachment-2266\" class=\"wp-caption-text\">Connector Teradata QueryGrid<\/figcaption><\/figure>\n<p>The Teradata Database is now able to connect to Hadoop through Query Grid, so this capability is called Teradata Database-to-Hadoop, also referred to as the Teradata-to-Hadoop connector.<\/p>\n<p><strong>The key purpose of Teradata Query Grid is to put data into the data lake FAST across foreign servers.<\/strong><\/p>\n<p>&nbsp;<\/p>\n<p><strong>What is Query Grid?<\/strong><\/p>\n<p>Query Grid connects a Teradata and a Hadoop system at massive scale, with minimal effort, at speeds of up to 10TB\/second.<\/p>\n<ul>\n<li>It provides a SQL interface for transferring data between Teradata Database and remote Hadoop hosts.<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<ul>\n<li>Import Hadoop data into a temporary or permanent Teradata table.<\/li>\n<li>Export data from temporary or permanent Teradata tables into existing Hadoop 
tables.<\/li>\n<li>Create or drop tables in Hadoop from Teradata Database.<\/li>\n<li>Reference tables on the remote hosts in SELECT and INSERT statements.<\/li>\n<li>Select Hadoop data for use with a business tool.<\/li>\n<li>Select and join Hadoop data with data from independent data warehouses for analytical use.<\/li>\n<li>Leverage Hadoop resources and reduce data movement<\/li>\n<li>Bi-directional transfer to and from Hadoop<\/li>\n<li>Query push-down<\/li>\n<li>Easy configuration of server connections<\/li>\n<\/ul>\n<p><strong>What is the Process Flow?<\/strong><\/p>\n<ul>\n<li>Query submitted through Teradata<\/li>\n<li>Sent to Hadoop through Hive<\/li>\n<li>Results returned to Teradata<\/li>\n<li>Additional processing joins data in Teradata<\/li>\n<li>Final results sent back to the application\/user<\/li>\n<\/ul>\n<p>With QueryGrid, we can add a clause in a SQL statement that says<\/p>\n<p><strong>\u201cCall up Hadoop, pass Hive a SQL request, receive the Hive results, and join it to the data warehouse tables.\u201d<\/strong><\/p>\n<p>Running a single SQL statement that spans Hadoop and Teradata is a big deal in itself. Notice too that all the database security, advanced SQL functions, and system management in the Teradata system support these queries. 
The only effort required is for the database administrator to set up a \u201cview\u201d that connects the systems.<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<ul>\n<li>QueryGrid<\/li>\n<\/ul>\n<ol>\n<li><strong>Server grammar<\/strong><\/li>\n<li>Simplify via \u201cserver name\u201d<\/li>\n<li><strong>Hadoop import operator<\/strong><\/li>\n<li>Load_from_hcatalog<\/li>\n<li>Added server grammar<\/li>\n<li><strong>Hadoop export operator (new)<\/strong><\/li>\n<li>Load_to_hcatalog<\/li>\n<li>Supported file formats:<\/li>\n<li>Delimited Text, JSON, RCFile<\/li>\n<li>Sequence File, ORCfile, Avro<\/li>\n<\/ol>\n<ul>\n<li>Query push-down<\/li>\n<li>Bi-directional data transfer<\/li>\n<li>Provide access rights<\/li>\n<\/ul>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-2267\" src=\"http:\/\/dbtut.com\/wp-content\/uploads\/2018\/08\/blog1.png\" alt=\"\" width=\"320\" height=\"319\" \/><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-2268\" src=\"http:\/\/dbtut.com\/wp-content\/uploads\/2018\/08\/blog2.png\" alt=\"\" width=\"300\" height=\"173\" \/><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p><strong>Parallel Performance:<\/strong><br \/>\nFor many years, data virtualization tools have lacked the ability to move data between systems in parallel. Such tools send a request to a remote database and the data comes back serially through an Ethernet wire. Teradata Query Grid is built to link to remote systems in parallel and interchange data through many network connections at once.<\/p>\n<p>Want to move a terabyte per minute? With the right configuration it can be done. Parallel processing by both systems makes this extremely fast. I know of no data virtualization system that does this today.<\/p>\n<p>&nbsp;<\/p>\n<p>Without doubt, the Hadoop cluster will have a different number of servers than the Teradata system or any other MPP system. 
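<\/p>
<p>Because the two clusters rarely have the same number of workers, the parallel exchange has to pair them up anyway. A minimal round-robin sketch in Python (illustrative only, not Teradata&#8217;s actual implementation; all names are invented):<\/p>

```python
# Illustrative sketch: pair N Teradata parallel workers (AMPs) with M Hadoop
# worker nodes so each AMP gets a transfer partner, regardless of cluster size.
def match_workers(amps, hadoop_nodes):
    """Return (amp, hadoop_node) pairs assigned round-robin."""
    return [(amp, hadoop_nodes[i % len(hadoop_nodes)])
            for i, amp in enumerate(amps)]

# Six AMPs, four data nodes: the nodes are simply reused in turn.
pairs = match_workers([f"AMP-{i}" for i in range(6)],
                      [f"dn{j}" for j in range(4)])
```

<p>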
The Teradata systems start the parallel data exchange by matching up units of parallelism between the two systems. That is, all the Teradata parallel workers (called AMPs) connect to a buddy Hadoop worker node for maximum throughput. Anytime the configuration changes, the worker match-up changes.<\/p>\n<p>Teradata Query Grid does all of this for us, completely invisibly to the user.<\/p>\n<p><strong>Query Grid Teradata to Hadoop Server Configuration:<\/strong><\/p>\n<p><strong>CREATE<\/strong> <strong>FOREIGN SERVER<\/strong> <strong>Hadoop_sysd_xav<\/strong> <strong>USING<\/strong> HOSTTYPE(&#8216;hadoop&#8217;) SERVER (&#8216;sysd.labs.teradata.com&#8217;) PORT (&#8216;9083&#8217;) HIVESERVER(&#8216;sysd.labs.teradata.com&#8217;) HIVEPORT (&#8216;10000&#8217;) USERNAME(&#8216;Hive&#8217;) DEFAULT_STRING_SIZE(&#8216;2048&#8217;) HADOOP_PROPERTIES(&#8216;org.apache.hadoop.io.compress.GzipCodec&#8217;)<strong>;<\/strong><\/p>\n<p>&nbsp;<\/p>\n<p><strong>DO<\/strong> <strong>IMPORT<\/strong> <strong>WITH<\/strong> syslib.load_from_hcatalog_hdp1_3_2,<\/p>\n<p><strong>DO<\/strong> <strong>EXPORT<\/strong> <strong>WITH<\/strong> syslib.load_to_hcatalog_hdp1_3_2 Merge_hdfs_files(&#8216;True&#8217;) Compression_codec(&#8216;org.apache.hadoop.io.compress.GzipCodec&#8217;)<strong>;<\/strong><\/p>\n<p>&nbsp;<\/p>\n<p>Server <strong>name<\/strong> =<strong> Hadoop_sysd_xav<\/strong><\/p>\n<p><strong>Table Name = 
xav_hdp_tbl@Hadoop_sysd_xav<\/strong><\/p>\n<p>&nbsp;<\/p>\n<p><strong>SELECT<\/strong>\u00a0\u00a0\u00a0\u00a0<strong>source<\/strong>,\u00a0<strong>session<\/strong><\/p>\n<p><strong>FROM<\/strong>\u00a0\u00a0\u00a0\u00a0<strong>xav_hdp_tbl@<\/strong><strong>Hadoop_sysd_xav<\/strong><\/p>\n<p><strong>WHERE<\/strong>\u00a0\u00a0\u00a0\u00a0session_ts\u00a0=\u00a0&#8216;2017-01-01&#8217;<strong>;<\/strong><\/p>\n<p>&nbsp;<\/p>\n<p><strong>QueryGrid Server Objects and Privileges:<\/strong><\/p>\n<p>1)\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0TD_SERVER_DB contains all servers objects<\/p>\n<p>2)\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Servers are global objects<\/p>\n<p>3)\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Users have SELECT and INSERT granted to them<\/p>\n<ol>\n<li>a)\u00a0\u00a0\u00a0 <strong>GRANT<\/strong> <strong>SELECT<\/strong> <strong>ON<\/strong> hdp132_svr <strong>TO<\/strong> Pankaj<strong>;<\/strong><\/li>\n<li>b)\u00a0\u00a0\u00a0 <strong>GRANT<\/strong> <strong>INSERT<\/strong> <strong>ON<\/strong> hdp143_svr <strong>TO<\/strong> Abid<strong>;<\/strong><\/li>\n<\/ol>\n<p>4)\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Being able to create and drop a server is a privilege<\/p>\n<ol>\n<li>a)\u00a0\u00a0\u00a0 <strong>GRANT<\/strong> <strong>CREATE<\/strong> SERVER<\/li>\n<li>b)\u00a0\u00a0\u00a0 <strong>GRANT<\/strong> <strong>DROP<\/strong> SERVER<\/li>\n<\/ol>\n<p>&nbsp;<\/p>\n<p><strong>Remote SQL Execution :<\/strong><\/p>\n<p>1)\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Push SQL to remote Hive system<\/p>\n<p>2)\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Hive filters data on non-partitioned columns<\/p>\n<p>3)\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Foreign table \u2018Select\u2019 executed on remote system<\/p>\n<p>&nbsp;<\/p>\n<p><strong>SELECT<\/strong>\u00a0\u00a0\u00a0 <strong>source<\/strong>, <strong>session<\/strong><\/p>\n<p><strong>FROM<\/strong>\u00a0\u00a0\u00a0 <strong>FOREIGN<\/strong> <strong>TABLE<\/strong>(<\/p>\n<p><strong>select<\/strong>\u00a0\u00a0\u00a0 
<strong>session<\/strong>, <strong>source<\/strong><\/p>\n<p><strong>from<\/strong>\u00a0\u00a0\u00a0 xav_hdp_tbl<\/p>\n<p><strong>where<\/strong>\u00a0\u00a0\u00a0 <strong>source<\/strong> = &#8216;Mozilla&#8217; )@Hadoop_sysd_xav <strong>AS<\/strong> dt<\/p>\n<p><strong>WHERE<\/strong>\u00a0\u00a0\u00a0 <strong>session<\/strong> = current_date<strong>;<\/strong><\/p>\n<p><strong>QueryGrid Data Transfer:<\/strong><\/p>\n<p><strong>Import<\/strong><\/p>\n<p><strong>SELECT<\/strong>\u00a0\u00a0\u00a0 <strong>source<\/strong>, <strong>session<\/strong><\/p>\n<p><strong>FROM<\/strong>\u00a0\u00a0\u00a0 xav_hdp_tbl@Hadoop_sysd_xav<\/p>\n<p><strong>WHERE<\/strong>\u00a0\u00a0\u00a0 session_ts = &#8216;2017-01-01&#8217;<strong>;<\/strong><\/p>\n<p>Use &#8220;insert\/select&#8221; &amp; &#8220;create table as&#8221; to instantiate the data locally.<\/p>\n<p>&nbsp;<\/p>\n<p><strong>Export<\/strong><\/p>\n<p><strong>INSERT<\/strong> <strong>INTO<\/strong> emp_xav@Hadoop_sysd_xav<\/p>\n<p><strong>SELECT<\/strong>\u00a0\u00a0\u00a0 emp_xav_id, emp_xav_zip<\/p>\n<p><strong>FROM<\/strong>\u00a0\u00a0\u00a0 emp_xav_data<\/p>\n<p><strong>WHERE<\/strong>\u00a0\u00a0\u00a0 last_update = current_date<strong>;<\/strong><\/p>\n<p>&nbsp;<\/p>\n<p><strong>QueryGrid Insert Explained:<\/strong><\/p>\n<p><strong>EXPLAIN<\/strong> <strong>INSERT<\/strong> <strong>INTO<\/strong> xav_data@hdp132_svr <strong>SELECT<\/strong> * <strong>FROM<\/strong> newcars<strong>;<\/strong><\/p>\n<p>&nbsp;<\/p>\n<p>***Success: Activity Count = 41 Explanation <em>&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212; <\/em><\/p>\n<p>1) First, we lock a distinct ut1.&#8220;pseudo table&#8221; for read on a RowHash to prevent global deadlock for ut1.tab1.<\/p>\n<p>2) Next, we lock ut1.tab1 for read.<\/p>\n<p>&nbsp;<\/p>\n<p>3) We do an all-AMPs RETRIEVE step from ut1.newcars by way of an all-rows scan with no residual conditions executing table operator SYSLIB.load_to_hcatalog with a condition of (&#8220;(1=1)&#8221;) into Spool 2 (used to materialize view, derived table, table function or table operator drvtab_inner) (all_amps), which is built locally on the AMPs. The size of Spool 2 is estimated with low confidence to be 8 rows (11,104 bytes). The estimated time for this step is 0.16 seconds.<\/p>\n<p>4) We do an all-AMPs RETRIEVE step from Spool 2 (Last Use) by way of an all-rows scan into Spool 3 (used to materialize view, derived table, table function or table operator TblOpInputSpool) (all_amps), which is redistributed by hash code to all AMPs. The size of Spool 3 is estimated with low confidence to be 8 rows (11,104 bytes). The estimated time for this step is 0.16 seconds.<\/p>\n<p>5) We do an all-AMPs RETRIEVE step from Spool 3 (Last Use) by way of an all-rows scan executing table operator SYSLIB.load_to_hcatalog with a condition of (&#8220;(1=1)&#8221;) into Spool 4 (used to materialize view, derived table, table function or table operator h4) (all_amps), which is built locally on the AMPs.<\/p>\n<p>&lt; BEGIN EXPLAIN FOR REMOTE QUERY <em>&#8211;&gt; TD: 3 column(s); Hadoop: 3 column(s), with 2 partition column(s); doors(INTEGER) -&gt; doors(STRING); make(VARCHAR) -&gt; make*(STRING); model(VARCHAR) -&gt; model*(STRING); * denotes partition column; <\/em><\/p>\n<p>&lt;<em>&#8212; END EXPLAIN FOR REMOTE QUERY &gt; The size of Spool 4 is estimated with low confidence to be 8 rows (200 bytes). 
The estimated time for this step is 0.16 seconds.<\/em><\/p>\n<p><em>\u00a0<\/em><\/p>\n<p><em>\u00a0<\/em><\/p>\n<p><strong>Create and Drop Hadoop Tables:<\/strong><\/p>\n<p>1)\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Stored procedures to create and drop Hadoop tables<\/p>\n<p>2)\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Allow SQL scripts to export data in standalone fashion<\/p>\n<p>&nbsp;<\/p>\n<p><strong>CALL<\/strong> SYSLIB.HDROP(&#8216;t3&#8217;,&#8217;hdp132_svr&#8217;)<strong>;<\/strong><\/p>\n<p>&nbsp;<\/p>\n<p><strong>CALL<\/strong> SYSLIB.HCTAS(&#8216;t3&#8217;,&#8217;c2,c3&#8217;,&#8217;LOCATION &#8220;\/user\/hive\/table_t12&#8221;&#8216;,&#8217;hdp132_svr&#8217;,&#8217;default&#8217;)<strong>;<\/strong><\/p>\n<p>&nbsp;<\/p>\n<p><strong>Connection Flow:<\/strong><\/p>\n<ul>\n<li>The client connects to the system through the PE in node 1. The query is parsed in the PE. During the parsing phase, the table operator\u2019s contract function contacts the HCatalog component through the External Access Handler (EAH), which is a one-per-node Java Virtual Machine.<\/li>\n<li>The HCatalog returns the metadata about the table, the number of columns, and the types for the columns. The parser uses this info and also uses this connection to obtain the Hadoop splits of data that underlie the Hadoop table.<\/li>\n<li>The splits are assigned to the AMPs in a round-robin fashion so that each AMP gets a split.<\/li>\n<li>The parser phase completes and produces an AMP step containing the table operator. This is sent to all the AMPs in parallel.<\/li>\n<li>Each AMP then begins to execute the table operator\u2019s execute function, providing a parallel import of Hadoop data.<\/li>\n<li>The execute function opens and reads the split data, reading in Hadoop rows. 
These are converted to Teradata data types in each column, and the rows are written to spool.<\/li>\n<li>When all the data has been written, the spool file is redistributed as input into the next part of the query plan.<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><strong>Performance\/Speed:<\/strong><br \/>\nImagine complex statistical analytics using R or SAS being run inside the Teradata data warehouse as part of a merger and acquisition project. In this case, we want to pass this data to the Hadoop data lake, where it is combined with temporary data from the company being acquired. With reasonably simple SQL placed in a database view, the answers calculated by the Teradata Database can be sent to Hadoop to help finish up some reports. Bi-directional data exchange is another breakthrough in Teradata Query Grid, new in release 15.0. The common thread in all these innovations is that the data moves from the memory of one system to the memory of the other. No extracts, no landing the data on disk until the final processing step \u2013 and sometimes not even then.<\/p>\n<p><strong>What is Push-down Processing?<\/strong><br \/>\nTo minimize data movement, Teradata Query Grid sends the remote system SQL filters that eliminate records and columns that aren\u2019t needed.<\/p>\n<p>This way, the Hadoop system discards unnecessary data so it doesn\u2019t flood the network with data that will be thrown away. 
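<\/p>
<p>A tiny sketch of what push-down amounts to: ship the column list and filter to the remote side so only qualifying rows ever cross the network. This is illustrative Python, not QueryGrid&#8217;s real mechanism; the names are invented:<\/p>

```python
# Build the SQL that the remote (Hive) side executes before any data moves.
# Pushing the projection and predicate down means rejected rows and unused
# columns are discarded on the Hadoop side, not shipped and then thrown away.
def push_down(table, columns, predicate):
    return f"SELECT {', '.join(columns)} FROM {table} WHERE {predicate}"

remote_sql = push_down("xav_hdp_tbl", ["source", "session"],
                       "session_ts = '2017-01-01'")
```

<p>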
After all the processing is done in Hadoop, data is joined in the data warehouse, summarized, and delivered to the user\u2019s favorite business intelligence tool.<\/p>\n<p>&nbsp;<\/p>\n<p><strong>Business Benefits:<\/strong><\/p>\n<p>&nbsp;<\/p>\n<ul>\n<li>No-hassle analytics with a seamless data fabric across all of our data and analytical engines<\/li>\n<li>Get the most out of your data by taking advantage of specialized processing engines operating as a cohesive analytic environment<\/li>\n<li>Transparently harness the combined power of multiple analytic engines to address a business question<\/li>\n<li>Self-service data and analytics across all systems through SQL<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><strong>IT Benefits<\/strong><\/p>\n<ul>\n<li>Automate and optimize use of your analytic systems through \u201cpush-down\u201d processing across platforms<\/li>\n<li>Minimize data movement and process data where it resides<\/li>\n<li>Minimize data duplication<\/li>\n<li>Transparently automate analytic processing and data movement between systems<\/li>\n<li>Enable easy bi-directional data movement<\/li>\n<li>Integrated processing without administrative challenges<\/li>\n<li>Leverage the analytic power and value of your Teradata Database, Teradata Aster Database, open-source Presto and Hive for Hadoop, Oracle Database, and powerful languages such as SAS, Perl, Python, Ruby, and R.<\/li>\n<li>High-performance query plans that use data from other sources within the Teradata Unified Data Architecture; passing workload priorities between systems makes the best use of available resources<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><strong>Requirements for Query Grid to Hadoop:<\/strong><\/p>\n<p>1)\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Teradata 15.0 or later<\/p>\n<p>2)\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Node memory &gt; 96GB<\/p>\n<ol>\n<li>a) Network: all Teradata nodes must be able to connect to all Hadoop data nodes<\/li>\n<li>b) Proxy user on 
Hadoop<\/li>\n<\/ol>\n<p>&nbsp;<\/p>\n<p><a href=\"https:\/\/in.linkedin.com\/in\/pxavient\">Pankaj Chahar<\/a><\/p>\n<p><em><br \/>\nReferences:<\/em><br \/>\n<em>http:\/\/www.teradata.com<\/em><br \/>\n<a href=\"https:\/\/en.wikipedia.org\/\"><em>https:\/\/en.wikipedia.org<\/em><\/a><\/p>\n<p><em>http:\/\/in.teradata.com\/products-and-services\/query-grid\/?LangType=16393&amp;LangSelect=true<\/em><\/p>\n<p>&nbsp;<\/p>\n<p>http:\/\/www.info.teradata.com\/download.cfm?ItemID=1001944<\/p>\n","protected":false},"excerpt":{"rendered":"<p>&nbsp; Teradata 15.0 has come up with many new 
exciting features and enhanced capabilities \u00a0.\u00a0Teradata Query Grid is one of them. Teradata database now able to connect Hadoop with this Query Grid so it\u2019s called as Teradata Database-to-Hadoop also referred as\u00a0\u00a0Teradata-to-Hadoop connector. Key Importance of Teradata Query Grid is to Put Data in the Data &hellip;<\/p>\n<div 
class=\"pvc_clear\"><\/div>\n","protected":false},"author":72,"featured_media":1395,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"om_disable_all_campaigns":false,"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"_uf_show_specific_survey":0,"_uf_disable_surveys":false,"footnotes":""},"categories":[1307],"tags":[1460,1461,1445,1355,1459,1354],"class_list":["post-2264","post","type-post","status-publish","format-standard","has-post-thumbnail","","category-teradata","tag-connector","tag-datalake","tag-hadoop","tag-nosql","tag-querygrid","tag-teradata"],"aioseo_notices":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v24.9 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Teradata Query Grid : Connection with different systems ( Database , NoSQL , Hadoop ) - Database Tutorials<\/title>\n<meta name=\"description\" content=\"\u201cCall up Hadoop, pass Hive a SQL request, receive the Hive results, and join it to the data warehouse tables.\u201d\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dbtut.com\/index.php\/2018\/08\/27\/teradata-query-grid-connection-with-different-systems-database-nosql-hadoop\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Teradata Query Grid : Connection with different systems ( Database , NoSQL , Hadoop ) - Database Tutorials\" \/>\n<meta property=\"og:description\" content=\"\u201cCall up Hadoop, pass Hive a SQL request, receive the Hive results, and join it to the data warehouse tables.\u201d\" \/>\n<meta property=\"og:url\" 
content=\"https:\/\/dbtut.com\/index.php\/2018\/08\/27\/teradata-query-grid-connection-with-different-systems-database-nosql-hadoop\/\" \/>\n<meta property=\"og:site_name\" content=\"Database Tutorials\" \/>\n<meta property=\"article:published_time\" content=\"2018-08-27T13:19:40+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2018-11-18T21:05:25+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dbtut.com\/wp-content\/uploads\/2018\/08\/download2.png\" \/>\n\t<meta property=\"og:image:width\" content=\"89\" \/>\n\t<meta property=\"og:image:height\" content=\"23\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Pankaj Chahar\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Pankaj Chahar\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"9 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/dbtut.com\/index.php\/2018\/08\/27\/teradata-query-grid-connection-with-different-systems-database-nosql-hadoop\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/dbtut.com\/index.php\/2018\/08\/27\/teradata-query-grid-connection-with-different-systems-database-nosql-hadoop\/\"},\"author\":{\"name\":\"Pankaj Chahar\",\"@id\":\"https:\/\/dbtut.com\/#\/schema\/person\/e8489438a2fab8a7d16e910b4edcfdb7\"},\"headline\":\"Teradata Query Grid : Connection with different systems ( Database , NoSQL , Hadoop 
)\",\"datePublished\":\"2018-08-27T13:19:40+00:00\",\"dateModified\":\"2018-11-18T21:05:25+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/dbtut.com\/index.php\/2018\/08\/27\/teradata-query-grid-connection-with-different-systems-database-nosql-hadoop\/\"},\"wordCount\":1819,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/dbtut.com\/#organization\"},\"image\":{\"@id\":\"https:\/\/dbtut.com\/index.php\/2018\/08\/27\/teradata-query-grid-connection-with-different-systems-database-nosql-hadoop\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/dbtut.com\/wp-content\/uploads\/2018\/08\/download2.png\",\"keywords\":[\"connector\",\"datalake\",\"hadoop\",\"NoSQL\",\"querygrid\",\"Teradata\"],\"articleSection\":[\"TERADATA\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/dbtut.com\/index.php\/2018\/08\/27\/teradata-query-grid-connection-with-different-systems-database-nosql-hadoop\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/dbtut.com\/index.php\/2018\/08\/27\/teradata-query-grid-connection-with-different-systems-database-nosql-hadoop\/\",\"url\":\"https:\/\/dbtut.com\/index.php\/2018\/08\/27\/teradata-query-grid-connection-with-different-systems-database-nosql-hadoop\/\",\"name\":\"Teradata Query Grid : Connection with different systems ( Database , NoSQL , Hadoop ) - Database Tutorials\",\"isPartOf\":{\"@id\":\"https:\/\/dbtut.com\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/dbtut.com\/index.php\/2018\/08\/27\/teradata-query-grid-connection-with-different-systems-database-nosql-hadoop\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/dbtut.com\/index.php\/2018\/08\/27\/teradata-query-grid-connection-with-different-systems-database-nosql-hadoop\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/dbtut.com\/wp-content\/uploads\/2018\/08\/download2.png\",\"datePublished\":\"2018-08-27T13:19:40+00:00\",\"dateModified\":\"2018-11-18T21:05:25+00:00\",\"description\":\"\u201cCall up 
Author: Pankaj Chahar | Published: 2018-08-27