This was an unexpected bug we found in our code involving a mapping variable, and I could not figure out why the variable value was not being picked up as expected.

The issue came up when I was trying to pass a value from a port to the variable and then use it in the workflow. When I checked at the session level, the variable value was different, even though the same port value flowing to the target looked correct.

With some repeated runs and tests, I was able to catch what had really gone wrong. I had a mapping variable defined, and my only options for the Aggregation type were 'MAX' or 'MIN', so I went with 'MAX'.

Then came assigning the port value to the variable with SETVARIABLE(). In doing so, I assumed the value on every run would simply be assigned to the variable. I was completely wrong in that assumption.
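For context, the assignment looked roughly like this in an output port of an Expression transformation (the port name NEW_VALUE here is only a placeholder, not the actual port from our mapping):

SETVARIABLE($$Var, NEW_VALUE)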

As stated in the Informatica Help:

 “At the end of a successful session, the PowerCenter Integration Service compares the final current value of the variable to the start value of the variable. Based on the aggregate type of the variable, it saves a final current value to the repository. Unless overridden, it uses the saved value as the initial value of the variable for the next session run.”

Breaking this down for easier understanding: the variable $$Var is first assigned its default value. Then, on a successful run, we try setting a new value. Say the default was $$Var = 10, the new value coming in is 15, and my aggregation type is 'MAX'; the Integration Service compares the new value with the old one, finds the new one greater, and so $$Var = 15 is saved.

And this value of $$Var will not change or be overwritten until a value greater than 15 comes along. That was exactly my issue: the new values were not being assigned as expected with SETVARIABLE().

So my solution was to reset the variable to a minimum value before the session starts. It is as simple as overriding it through a parameter file (.prm file), so I set '$$Var=0' there. Now the incoming value is always the greater one, and the latest value gets assigned to the variable :)
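A minimal sketch of that parameter file entry, where the folder, workflow and session names below are pure placeholders:

[MyFolder.WF:wf_demo.ST:s_m_demo]
$$Var=0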

Hope this gives you a good understanding; please put down your questions and comments, if any. Thanks!!

Let's shed some light on the basic architecture of PowerCenter, which is ABC-level learning for any Informatica developer. It is good to know how and where our Infa objects get created and the way they run in the back end.

We may already be well aware that Informatica has a multi-tier, client-server architecture, with different services running together behind what we simply see on the PowerCenter Client as objects being created, deleted and run. As developers, we must also be well aware of the repository database and its purpose: it stores all the metadata for the objects in Informatica.

Infa_Repo_Architecture

As shown here, we have the two major services, the Repository Service and the Integration Service. The Repository Service handles communication between the PowerCenter components, responding to the PowerCenter Client and reaching into the repository database over a native driver connection for the metadata.

PowerCenter Client uses native protocol to communicate with the PowerCenter Repository Service and PowerCenter Integration Service. The PowerCenter Client uses ODBC to connect to source and target databases.

How it works:

When we use the PowerCenter Client, we create a domain connection with a gateway from the client tool. The Service Manager on the server then connects to the PowerCenter Repository Service over an established TCP/IP connection. We also have the repository database, defined during the Informatica installation, to which the Repository Service connects with a native driver connection.

Every time a job/task runs, the Client connects to the Integration Service, which in turn interacts with the Repository Service to get the repository objects from the repository database. The Integration Service uses TCP/IP to connect to the associated PowerCenter Repository Service and retrieve metadata.

The ODBC connection in the Client/Integration Service:
Whenever we create a source/target definition on our client machine, an independent ODBC connection is created from the client machine. The Integration Service works in a similar way, connecting to the source through an ODBC/native connection that has already been established and defined.

The Flow:

The flow runs like this: when we connect through a client (e.g. Workflow Manager), a repository connection is made with the Repository Service, which retrieves metadata from the repository database as requested by the client. Then, as we run a workflow, the Client hands the command over to the Integration Service, which connects to the source/target through an ODBC connection to fetch the data or write the output. The Integration Service also interacts with the Repository Service for the Infa object metadata, and the process runs in the Integration Service as defined in the mapping.
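As a small illustration of that client-to-Integration-Service handoff, a workflow can also be kicked off from the command line with pmcmd; a rough sketch, where the service, domain, user, folder and workflow names are all placeholders:

pmcmd startworkflow -sv IS_demo -d Domain_demo -u infa_user -p infa_pwd -f MyFolder wf_demo

The Integration Service then pulls the workflow and mapping metadata from the repository through the Repository Service, exactly as described above.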

Hope this gives a bit of clarity on the behind-the-scenes working of our Informatica. Do let me know your suggestions, corrections and queries on this article.

Hello friends, time for some learning. This time I have taken up Informatica variables and parameters, which I am very fond of :)

This is something I came across in a development my team was involved in: we needed to pass a value from one task/session to another, since a value produced by one session had to be checked and then used further along in the process.

blog_banner_session_Variable_assignment

For instance, my requirement was to count the records from session_1 and then count the records in session_2 with the same counter (incrementing from where the session_1 count ended), ending up with the total record count. Something like: I had 100 records in session_1, so session_2 has to count on from there as 100+1 and so on. Looks simple, but it is quite a nail-biting and challenging experience when you try to get it done in Informatica!!

And then we came across a feature available in Informatica: 'Pre-Session/Post-Session Variable Assignment'. It is all about assigning variable values between a session and its parent worklet/workflow, and vice versa. Interesting, right? If it is not clear yet, let's go through a step-by-step illustration for a better understanding.

1. All we need to do is define a counter mapping variable/parameter $$v1_Count for session_1; the session then runs and populates $$v1_Count the way any mapping variable/parameter does (a sketch of how the counter itself can be populated follows after these steps).

2. Next, we assign this value to a common variable that acts as an intermediary between our two sessions. This will be a parent worklet/workflow variable, defined like any other variable under the Edit Workflow/Worklet options. In my case I have a workflow variable $$wf_v_Count defined.

3. Now in session_1, under the Components tab, let's edit the "Post-session variable assignment" option (it can be on success or on failure),

(i) Post-session on success variable assignment
(ii) Post-session on failure variable assignment

and I define it as "Parent_Workflow/Worklet_Variable = Mapping_Variable/Parameter"

i.e., $$wf_v_Count = $$v1_Count

So now the $$v1_Count value is available for any task, session or worklet under the same parent to pick up and use for processing.

4. In our case, for the next session_2, I go to the Components tab -> "Pre-session variable assignment" and define an assignment of the value from $$wf_v_Count to that session's mapping variable/parameter $$v2_Count.

So we define it as “Mapping_Variable/Parameter = Parent_Workflow/Worklet_Variable”

i.e., $$v2_Count = $$wf_v_Count
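For completeness, here is a hedged sketch of how the counter in step 1 might get populated inside the mapping of session_1, assuming $$v1_Count is defined with aggregation type Count and the call sits in an output or variable port of an Expression transformation (the names are illustrative):

SETCOUNTVARIABLE($$v1_Count)

This increments $$v1_Count by one for every row that passes through, and session_2 simply carries on from the value handed over via $$wf_v_Count.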

The same variable can be used anywhere in the parent workflow/worklet, in decision tasks or in link conditions (a small example follows below). Thus we have now passed the value from one session to the other, where it can be processed further.
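For instance, a link condition between session_1 and session_2 (the threshold of 100 is only an illustration taken from the example above) could be:

$$wf_v_Count >= 100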

Hope this was an interesting and helpful piece for you. Do post your comments on any corrections or queries with respect to this. Share it if it can help someone.

Hello friends..

I have been trying to get to this article for a long time, but have been tied up with other commitments. Finally, time to share an interesting topic with all the Informatica nerds.

It is not a very functional or logical thing I am sharing with you here; it is just about how I made use of a Sorter and never meant to sort anything!

Confused?? OK, without much ado, let's get on with this..

I needed to compare two result sets from the same source (a SQL table) and then update that same table, a kind of data clean-up activity I was involved in. So I finished my mapping design and went on to run the session as well. But to my surprise, the two source queries were competing with each other while an insert/update was also hitting the target end of the same table, putting a lot of pressure on the DB and causing the process to hang, with the data load literally halting.

Using_Sorter_Not_For_Sorting

Then a bulb glowed (a bit late, though).. and the thought was: why can't I hold the data back at one point and, only once it has all arrived, open the tap? But how?

As we all know, 'The Integration Service passes all incoming data into the Sorter transformation before it performs the sort operation.' I just made use of this behaviour, and as you might have guessed by now, I placed two Sorters on the two incoming source flows before they join. So only once each source is fully read with its own SQ SQL override are all the downstream transformations performed, and finally the data is written onto the same SQL table.

Hope this comes as a savior to you too at a similar point when you feel helpless!!

Share and help others learn with a joy :)

When I first came across the Normalizer, all I was thinking about was passing multiple values held in several columns of one row and multiplying/normalising the source into several rows. Say I have a source table tracking each customer's instalment on a monthly basis, and my requirement is to write these out as separate transactions to the target; this is where normalising through this transformation comes in handy.

What we all might know -

So my assumption was that the limit of this transformation was just normalising denormalised data, but then I got to know that we can make good use of a few more of its features. In the Normalizer tab of the transformation we define the ports that pass through, and in doing so we also define the 'Occurs' value; this is where we specify the column that has to be normalised from multiple columns into rows. In our instance I define the monthly amount column, with Occurs given as 12.

Understanding the GCID and GK ports -

Now my Normalizer will have 12 input ports, and the single output port for these will produce 12 rows with different amount values, the other columns carrying the same value. I also need to keep track of the months, so I would like one extra column, say Month_Name, that gives the month. How do I do this? Quite simple: in the Ports tab, along with the 12 Month_Amount ports, a new port is auto-created as 'GCID_(the recurring column name)'. This holds an integer value, the Generated Column ID, which cycles through 1 to 12 for each amount with respect to the month.

Now all we have to do is pull this port into the next Expression transformation as well and write an expression that derives the month name from the GCID value (a sketch follows below). Also.. also, we have another port created by default in the Normalizer for the multiple-occurring column, GK_{column_name}. This is the Generated Key column, which can be used if a unique key value is required.
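A minimal sketch of such an expression for a Month_Name output port, assuming the recurring column is named Month_Amount so that the generated port is GCID_Month_Amount:

DECODE(GCID_Month_Amount,
       1, 'January',   2, 'February',  3, 'March',
       4, 'April',     5, 'May',       6, 'June',
       7, 'July',      8, 'August',    9, 'September',
      10, 'October',  11, 'November', 12, 'December',
      'Unknown')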

VSAM and the Pipeline Type:

The Normalizer I have used here is the pipeline Normalizer, as in the example instance we just discussed. There is also a type known as the VSAM Normalizer, which is a Source Qualifier-like transformation for a COBOL source (a COBOL VSAM source). The VSAM COBOL source gives denormalised data, which is then normalised through the VSAM Normalizer. The VSAM Normalizer does not allow the ports to be edited, and its Normalizer tab is read-only. A VSAM Normalizer transformation has one input port for a multiple-occurring column, unlike the pipeline Normalizer, which has multiple input ports for the multiple-occurring value.

I have not yet had a chance to get my hands wet with the VSAM type; I hope to some day soon and shall update you on many more such experiences. Until then, take care and happy learning :)

This is something that always used to confuse me and so get me into trouble. It feels like hell when you are trying to debug why your mapping behaves wrong :(

This time around, the same bug was again getting on top of my head and was good enough to frustrate me. As usual I was able to work out the cause, and by EOD I decided 'this has to go on the blog'.

The issue was that I was getting a blank in the target where I was expecting a string value to flow in. On debugging the mapping, I got down to the Source Qualifier and found that the source value was being pulled as '0'. I had assumed that if I include a SQL override in my Source Qualifier, the port order/port name/port data type in the SQ would not be considered. I could see this was happening for only one column, where a string value was flowing in as an integer '0'.

Now I could see that this was all because that port's data type in the SQ was integer, whereas the corresponding column in the SQL override was of string data type. The rest of the columns had matching data types; the port names being different does not matter.

For a better understanding, let me take an instance:

My ports as in the Source Qualifier -

Port1 – Integer
Port2 – Integer
Port3 – String

whereas my SQL override goes like this -

Select Column_A, Column_B from some_table where Column_B = 'some value'

the SQL table structure as below:
Column_A – Integer
Column_B – String

And here, when we pull the data from the source, our mapping will pull the Column_A values into Port1 correctly, whereas the values for Column_B will be fetched wrongly because the data type does not match between the SQ and the SQL query: the string gets pulled as an integer value and the data goes wrong.

Important note: we should also make sure to change the precision in our SQ (we can't change the data type in an SQ; that would invalidate our mapping). This gives a warning when the session runs, but it can be ignored as long as the right data is loading from the source.

So how do we get this resolved? All we have to do is change the port order in the SQ, as I have done for the above instance:

Port1 – Integer
Port3 – String
Port2 – Integer

So now Column_B lands in Port3, with the data types matching. Also make sure to change the precision value to match the SQL column's precision, and everything will be set for the data to flow perfectly well.
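Alternatively, purely as a hedged sketch (assuming some_table also has another integer column, here called Column_C just for illustration), you could keep the original port order and instead make the SELECT list line up with the ports positionally:

Select Column_A, Column_C, Column_B from some_table where Column_B = 'some value'

Either way, the point is the same: the override columns are matched to SQ ports by position, so the data types have to agree position by position.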

Hope this helps, share and let others know. Have a bug free day :)

I was once in a situation where I needed a guide to help me get a bulk import/export done, and I was stuck because I could not find one good article that covered it. It was all in bits, hard to get it all in one place and get what I expected. So let me take it up and put it down here; hope this helps for a better understanding.

blog_banner_bcp_utility

We had some data in a flat file populated with an ETL (Informatica) tool, and now I had to load that data into my SQL database table as a bulk load. To load data in bulk we have the BCP utility, which comes in handy for bulk loading from a data file to the SQL table and vice versa.

BCP supports both import and export, to load data into and out of the SQL table. The standard BCP command syntax is as follows:

bcp {db_name.schema_name.Table_Name} in/out {File_Path\file_name.txt} -S {server_name} -t 'field_terminator' -T

here we have,
-S -> server name
-t -> field terminator (for example "\t" for tab delimited, "," for comma delimited)
in/out -> 'in' to import from the data file into the SQL table, 'out' to export from the SQL table to a data file
-T -> use a trusted connection (for a non-trusted connection we pass '-U' user_name and '-P' password)

This is the simple bcp command that can help you load data faster and in an elegant way.
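As a quick hedged example (the database, table, file path and server names below are just placeholders), importing a comma-delimited character file could look like this; note the extra -c switch, which tells bcp to treat the data as character fields so it does not prompt for a storage type per column:

bcp SalesDB.dbo.Customer in "C:\data\customer.txt" -S MYSERVER -c -t "," -T

Swapping in for out exports the table back to a file with the same options.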

There is one add-on feature, a parameter you can define for the bcp command, which makes the mapping between the data file and the table tighter and provides a flexible system for writing data files that require little or no editing to comply with other data formats.

We need to create a format file before loading the data with the bcp command, and the syntax is as follows:

bcp {db_name.schema_name.Table_Name} format nul -c -f {Format_File_Path\file_name.fmt} -S {server_name} -T

format nul -f -> generates a format file; -f defines the path and name of the format file to be created (a .fmt or .xml file)

We can have either an XML or a non-XML format file; here I go with a non-XML format file, which gives a structure that looks as below.

blog_banner_format_file
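For reference, a hedged sketch of what such a non-XML format file can look like for a three-column table (the version number, lengths, column names and collations below are purely illustrative):

12.0
3
1   SQLCHAR   0   12   ","      1   Customer_ID     ""
2   SQLCHAR   0   50   ","      2   Customer_Name   SQL_Latin1_General_CP1_CI_AS
3   SQLCHAR   0   30   "\r\n"   3   City            SQL_Latin1_General_CP1_CI_AS

Each row maps a field in the data file (host field order, data type, prefix length, length, terminator) to a table column (column order, column name, collation).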

Hope this helps you get your bulk load going between a data file and the SQL table. Learn and share :)
