Hello All,

Hope you all had a great Christmas and are looking forward to kicking off a new year soon. Before we kick off, a happy new year to all of you in advance..!!

Just as I was waking up to welcome a sunny Saturday, a thought kept nagging at me, telling me to do this: to write a post on something that had badly attacked me. It was, of course, nothing like a physical attack 🙂

So without further dawdling, let me explain how hard I had to fight against the Target Update Override, which lets me customize the update strategy used in Informatica, allowing us to update any target table even without the right primary key combination for the update. I have done this before, and each time with a blunder that taught me something, yet I kept making the same mistake again and again. I then thought I should help fellow developers avoid it, and so I am writing this blog post to help you out with Target Update Override usage.

Usually, when you go for an update on your target, you would use the Update Strategy transformation linked to your target ports, with the primary key columns on which Informatica performs the update of the rest of the columns you link to the target. But there is also the scenario where you wish to update a few fields with a conditional check over the other, non-primary-key columns.

There are two ways of doing this:

Method 1 –

Change the target structure on the Informatica side so that the ports/columns on which you will perform the check drive the update. This means you have to enable the primary key property on these check fields at the Informatica end, and then use the Update Strategy transformation in the usual way.


Method 2 –

Script an update SQL override on the target. It is done with the usual UPDATE query syntax written on the target object in the mapping, but there is a strict syntax to follow, which includes very proper spacing between the operators and the SQL literals. By default, the PowerCenter Integration Service updates target tables based on key values, but you can override the default UPDATE statement for each target in a target definition. As discussed, we would like to update on the basis of the non-key columns, and this can be done as shown below.

Syntax:

UPDATE SALES_TBL SET EMP_NAME = :TU.EMP_NAME, DATE_SHIPPED = :TU.DATE_SHIPPED, TOTAL_SALES = :TU.TOTAL_SALES WHERE EMP_ID = :TU.EMP_ID

Because the target ports must match the target column names, the update statement includes the keyword :TU to specify the ports in the target transformation. If you modify the UPDATE portion of the statement, use :TU to specify ports.
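
For the non-key scenario discussed above, the WHERE clause can reference any column, not just the key. A sketch, reusing the table above (the choice of check columns is hypothetical):

UPDATE SALES_TBL SET TOTAL_SALES = :TU.TOTAL_SALES WHERE EMP_NAME = :TU.EMP_NAME AND DATE_SHIPPED = :TU.DATE_SHIPPED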

I made a mistake..!!

I am adding this part just to save your time. I had made a very silly mistake while composing my update override query. For a very long time my session kept failing, stating that my update query was wrong, and I was unable to figure out what had gone astray.

I revisited it almost a hundred times, changing all that I could, and still could not see what was wrong 😦

Finally a big brick hit my head: I copied my update SQL into Notepad and found that I had no space between the WHERE clause and its operand. I had not been able to spot this because the statement broke to a new line at exactly that point, which I had taken for the usual word wrap..!!
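
Pasted into Notepad, the broken statement looked something like this (reconstructed for illustration):

UPDATE SALES_TBL SET EMP_NAME = :TU.EMP_NAME, DATE_SHIPPED = :TU.DATE_SHIPPED, TOTAL_SALES = :TU.TOTAL_SALES WHEREEMP_ID = :TU.EMP_ID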


Hope this helps you and saves you time. Thanks for reading the post, and do share, as it’s fun to learn and share 🙂


This is something I had faced a couple of times in my development while adding logic to a mapping, and I had always made the same mistake. That is why I have decided to share it with other developers: avoid using the '!=' (not equal) operator in your expressions as much as you can.

Let me quickly put down the real issue I faced with the '!=' operator. It was a really interesting learning experience in understanding what your data is and how it is actually processed by Informatica’s Integration Service.

I had a requirement where I had to populate two targets satisfying two different pieces of logic based on a column. As an illustration, take the records below:

SID_No  Group_Nm  Amount
1       A         1000
2       B         1500
3       A         2000
4       NULL      1200
5       NULL      1000
6       C         2000
7       C         3000

Now my need for the day was to populate the two targets as:

Target_A -> where Group_Nm = 'A'
Target_B -> where Group_Nm != 'A' and Group_Nm != 'B' (expecting all records other than those in Target_A that are also not 'B')

As the above logic says, I straight away put down my expression as below, expecting all Group-A records to be tagged Flag=1 and the remaining records with a group name != 'B' to go to Flag=2.

IIF(Group_Nm = 'A', 1,
IIF(Group_Nm != 'B', 2, 0))

I was expecting:
1, 3 to be tagged Flag=1
4, 5, 6, 7 to be tagged Flag=2

But what I actually got in my targets was:
1, 3 tagged Flag=1, flowing to Target_A
6, 7 tagged Flag=2, flowing to Target_B

So it was quite clear that the issue was with the records having Group_Nm as NULL; the expression was not handling NULL values as expected. The root cause was the second IIF condition checking Group_Nm != 'B': scanning the remaining records 2, 4, 5, 6, 7, it found 6 and 7 satisfying the condition. As for 4 and 5, because they held NULL values the comparison never evaluated to true (a comparison against NULL is neither true nor false), so they fell through and were flagged as '0'.
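
The same behaviour shows up in plain SQL, where any comparison against NULL evaluates to unknown. A quick sketch, assuming the records above sit in a table with the hypothetical name SALES_SRC:

SELECT SID_No FROM SALES_SRC WHERE Group_Nm != 'B';
-- returns 1, 3, 6, 7; the NULL rows 4 and 5 never satisfy the predicate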

Quickly understanding all this, I changed the expression to invert the check, as below:

IIF(Group_Nm = 'A', 1,
IIF(Group_Nm = 'B', 0, 2))
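
Alternatively, if the '!=' check has to stay, the NULL case can be handled explicitly with Informatica's ISNULL() function, something like this sketch:

IIF(Group_Nm = 'A', 1,
IIF(ISNULL(Group_Nm) OR Group_Nm != 'B', 2, 0))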

So finally I had my results as expected, with the right records flowing to their respective targets. Hope this gave you a good understanding. Please drop in your questions or suggestions, if any.

This was an unexpected bug found in our code dealing with a mapping variable, and I was unable to figure out why the variable value was not being picked up as expected.

The issue arose when I was trying to pass a value from a port to the variable and use the same in the workflow. While checking at the session level, the value was different, whereas the same port value flowing to the target looked correct.

With some repeated runs and tests, I was able to catch what had really gone wrong. I had defined a mapping variable, and my only option was to set the Aggregation type to either 'MAX' or 'MIN', so I went with 'MAX'.

Then came assigning the port value to the variable with SETVARIABLE(). In doing so, I assumed the value would simply be assigned to the variable on every run. I was completely wrong in this assumption.
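
For context, the assignment sits in an Expression transformation output port, something like this (the port name in_LatestValue is hypothetical):

SETVARIABLE($$Var, in_LatestValue)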

As stated in the Informatica help:

 “At the end of a successful session, the PowerCenter Integration Service compares the final current value of the variable to the start value of the variable. Based on the aggregate type of the variable, it saves a final current value to the repository. Unless overridden, it uses the saved value as the initial value of the variable for the next session run.”

Breaking this down for easy understanding: the variable $$Var is first assigned its default value. Then, on a successful run, we try setting a new value. Say the default value was $$Var=10, the new value is 15, and the aggregation type is 'MAX': the service compares the new value with the old, finds the new one greater, and so $$Var=15 is saved.

And this saved value of $$Var will not change or be overwritten until a value greater than 15 comes along. That was exactly my issue: new values were not being assigned as expected with SETVARIABLE().

Thus my solution was to reset the variable to a minimum value before the session starts. It is as simple as overriding the value through the parameter file (.prm file), so I set '$$Var=0' there. Now the incoming value is always the greater one and gets assigned to the variable on every run 🙂
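
A minimal parameter file entry for this would look like the following (folder, workflow and session names are hypothetical):

[MyFolder.WF:wf_Load_Sales.ST:s_m_Load_Sales]
$$Var=0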

Hope this gives you a good understanding; please put down your questions and comments, if any. Thanks!!

Hello friends..

I have been trying to get to this article for a long time but kept being pulled away by other commitments. Finally, it is time to share an interesting topic with all the Informatica nerds.

It is not so much a functional or logical thing I am here to share; it is just about how I made use of the Sorter while never meaning to sort anything!

Confused?? OK, without much ado, let's get on with this..

I needed to compare two result sets from the same source (a SQL table) and then do an update on the same table, a kind of data clean-up activity I was involved in. I finished my mapping design and went on to run the session as well. But to my surprise, all I saw was the two source queries competing with each other while inserts/updates hit the same table on the target end, putting a lot of pressure on the DB. The process hung, and the data load literally halted.


Then a bulb glowed (a bit late, though), and the thought was: why can't I dam the data at one point and open the tap only once everything has arrived? But how?

As we all know, ‘The Integration Service passes all incoming data into the Sorter transformation before it performs the sort operation.’ I just made use of this blocking behaviour: as you might have guessed by now, I placed a Sorter on each of the two incoming source flows before the join. The source is read with the two different SQ SQL overrides, all the transformations are performed, and only then is the data written onto the same SQL table.

Hope this serves as a saviour for you too when you are stuck at a similar point!!

Share and help others learn with joy 🙂

This is something I got to learn from my teammate: a very good and simple way of making your work much easier.

The requirement said: create a SQL table and load the data from an MS Excel file. There were almost 50 columns, so I had the tiring manual job of listing down the column names and then writing a SQL script to create the required table. I was sure that once the table was ready, I could convert the Excel file to a tab-delimited text file and load the whole data through a mapping, with the flat file (text file) as the source and the SQL table I would create as the target.

Not a big challenge, but it needs a smart solution when the table has plenty of columns to create. Here is a better way to do this for Informatica developers.

All we have to do is import the flat file source into the Informatica Source Analyzer and then drag and drop the same source object into the Target Designer; in other words, copy the source as a target. Now we have a target created the same as the source. And here is where we act smart: change the target Database Type to ‘Microsoft SQL Server’, and our flat file target becomes a SQL Server target table.

Now comes the trick of making things easier: go to the Targets menu -> Generate/Execute SQL.., and the wizard window appears to generate the SQL script for us. Since we need a create table script, just select the ‘Create Table’ option in the wizard.

A click on Generate SQL File then gets us the script for creating a table with any number of columns. So here we are, ready with a create table script to make our job easier. All that remains is to run it in SQL Server Management Studio to get the table created. And populating it, as we all know, is pretty simple: we have our flat file source and the target available, so a mapping to load from source to target, one execution, and we have the table created and the data populated.
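
For illustration, the generated script looks something like this (the table and column names here are hypothetical, with types inferred from the flat file definition):

CREATE TABLE EMP_SALES (
    EMP_ID       INT,
    EMP_NAME     VARCHAR(50),
    DATE_SHIPPED DATETIME,
    TOTAL_SALES  DECIMAL(12,2)
);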

Isn’t that simple!!?? Share and comment 🙂

I had the whole stage set, with my SQL Server 2008 R2 installed and ready to use. But this brought no relief; there was something that kept me pulling my hair for almost two full weekends after finishing the installation 😦

I was struggling to get my SQL Server on track and connect to the database engine!!

It kept saying: “error: 40 – could not open a connection to SQL Server”.

All I had to do was look for the SQL Server services: right-click My Computer (Computer) -> Manage -> Computer Management. From there, go to Services and Applications, then SQL Server Configuration Manager, and check the state of the services listed under SQL Server Services.

Stopped services are the root cause keeping us from logging into the database engine, and the simple solution is to right-click each service and look at its properties.

Then all you have to do is start the services and give your database connection one more try. In case you find any problem with the login user, simply give the user name as ‘sa’ and a blank password with SQL Server Authentication. This should resolve things and get you logged in. Finally, my wait was over after almost two full weekends 🙂
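
For reference, the services can also be started from an elevated command prompt (this assumes the default instance name, MSSQLSERVER):

net start MSSQLSERVER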

Thanks for reading; share this, and it may well help someone out.

How to enable copying of column headers along with query results?

Initially I struggled to get this option working, and found it hard to type the headers manually wherever I copied my query results. So here is the solution for you.

It is a simple flow of steps to enable this. As usual, on your SQL Server, just follow the steps below and the headers from the query results will be copied to your clipboard for pasting anywhere.

Select the Tools menu at the top of your window, then select Options from the Tools menu.

The Options window now appears; on the left we have the menu Query Results -> SQL Server -> Results to Grid.

Click ‘Results to Grid’, and on the right pane of the Options window you will find the option ‘Include column headers when copying or saving the results’.
Just check this option and we are done; the headers are now copied along with the query result data.

Thanks for reading this post. Feel free to share your comments and views on the same. Have fun learning 🙂
