
    DATASTAGE POINTS

You can view and modify the table definitions at any point during the design of your application.

1) Server jobs do not support partitioning techniques, but parallel jobs do.

2) Server jobs do not support SMP and MPP, but parallel jobs do.

3) Server jobs run on a single node, but parallel jobs run on multiple nodes.

4) Server jobs are preferred when the source data volume is low; when the data volume is huge, prefer parallel jobs.

    What is metadata?

Data about data. A table definition which describes the structure of the table is an example of metadata.

What is the difference between maps and locales?

Maps define the character sets that the project can use.

Locales define the local formats for dates, times, sorting order, and so on that the project can use.

What is the difference between DataStage and Informatica?

DataStage supports parallel processing, which Informatica does not.

Links are objects in DataStage; in Informatica the equivalent is port-to-port connectivity.

In Informatica it is easy to implement Slowly Changing Dimensions, which is a little more complex in DataStage.

DataStage does not support complete error handling.

What are the types of stage?

A stage can be passive or active. A passive stage handles access to databases for the extraction or writing of data. Active stages model the flow of data and provide mechanisms for combining data streams, aggregating data, and converting data from one data type to another.

There are two types of stage:

Built-in stages: supplied with DataStage and used for extracting, aggregating, transforming, or writing data.

Plug-in stages: additional stages defined in the DataStage Manager to perform tasks that the built-in stages do not support.

How can we improve the performance in DataStage?

In the server canvas we can improve performance in two ways:


Firstly, we can increase memory by enabling inter-process row buffering in the job properties.

Secondly, by inserting an IPC stage we break a process into two processes. We can use this stage to connect two passive stages or two active stages.

What is APT_CONFIG in DataStage?

DataStage understands the architecture of the system through this file (APT_CONFIG_FILE).

For example, this file contains information such as node names, disk storage information, etc. APT_CONFIG is just an environment variable used to identify the *.apt file. Do not confuse it with the *.apt file itself, which holds the node information and the configuration of the SMP/MPP server.
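As a minimal two-node sketch of what such a configuration file looks like (the host name and resource paths here are placeholders, not real values):

{
    node "node1"
    {
        fastname "etl_host"
        pools ""
        resource disk "/data/datasets" {pools ""}
        resource scratchdisk "/data/scratch" {pools ""}
    }
    node "node2"
    {
        fastname "etl_host"
        pools ""
        resource disk "/data/datasets" {pools ""}
        resource scratchdisk "/data/scratch" {pools ""}
    }
}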

What are Sequencers?

Sequencers are job control programs that execute other jobs with preset job parameters.

How do you register plug-ins?

Using the DataStage Manager.

What are the command line functions that import and export the DS jobs?

dsimport.exe imports the DataStage components.

dsexport.exe exports the DataStage components.
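A hedged sketch of typical invocations (the domain, host, credentials, project, and file names are placeholders, and the exact option set varies by DataStage release, so check the documentation for your version):

dsimport.exe /D=services_host:9080 /H=engine_host /U=dsadm /P=secret dstage1 C:\exports\MyJob.dsx

dsexport.exe /D=services_host:9080 /H=engine_host /U=dsadm /P=secret /JOB=MyJob dstage1 C:\exports\MyJob.dsx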

    14 Good design tips in Datastage

1) When you need to run the same sequence of jobs again and again, it is better to create a sequencer with all the jobs that you need to run. Running this sequencer will run all the jobs. You can provide the sequence as per your requirement.

2) If you are using a Copy or a Filter stage either immediately after or immediately before a Transformer stage, you are reducing efficiency by using more stages, because a Transformer does the job of both a Copy stage and a Filter stage.

3) Use Sort stages instead of Remove Duplicates stages. The Sort stage has more grouping options and sort indicator options.

4) Turn off Runtime Column Propagation wherever it is not required.

5) Make use of Modify, Filter, Aggregator, Column Generator, etc. stages instead of the Transformer stage only if the anticipated volumes are high and performance becomes a problem. Otherwise use the Transformer; it is much easier to code a Transformer than a Modify stage.

6) Avoid propagation of unnecessary metadata between the stages. Use the Modify stage to drop the metadata. The Modify stage will drop the metadata only when it is explicitly specified using the DROP clause.

7) Add reject files wherever you need reprocessing of rejected records or you think considerable data loss may happen. Try to keep reject files at least at Sequential File stages and when writing to Database stages.


8) Make use of the ORDER BY clause when a DB stage is being used in a join. The intention is to make use of database power for sorting instead of DataStage resources. Keep the join partitioning as Auto. Indicate the "don't sort" option between the DB stage and the Join stage using a Sort stage when using the ORDER BY clause.

9) While doing outer joins, you can make use of dummy variables for just null checking instead of fetching an explicit column from the table.

10) Data partitioning is a very important part of parallel job design. It is always advisable to leave the data partitioning as "Auto" unless you are comfortable with partitioning, since all DataStage stages are designed to perform in the required way with Auto partitioning.

11) Do remember that Modify drops the metadata only when it is explicitly asked to do so using KEEP/DROP.


    5 Job "arameters should always be used for le paths# le names# database login settings.

    5 StandardiIed 9rror Kandling routines should be followed to capture errors and rejects.
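For illustration (the parameter names here are hypothetical), stage properties would then reference job parameters rather than hard-coded values:

File = #pSourceDir#/#pFileName#
User = #pDBUser#
Password = #pDBPassword#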

    Component Usage

The following guidelines should be followed when constructing parallel jobs in IBM InfoSphere DataStage Enterprise Edition:

• Never use Server Edition components (BASIC Transformer, Server Shared Containers) within a parallel job. BASIC routines are appropriate only for job control sequences.

• Always use parallel Data Sets for intermediate storage between jobs unless that specific data also needs to be shared with other applications.

• Use the Copy stage as a placeholder for iterative design, and to facilitate default type conversions.

• Use the parallel Transformer stage (not the BASIC Transformer) instead of the Filter or Switch stages.

• Use BuildOp stages only when logic cannot be implemented in the parallel Transformer.

    DataStage Datatypes

The following guidelines should be followed with DataStage data types:

• Be aware of the mapping between DataStage (SQL) data types and the internal DS data types.


Any stage that processes groups of related records (generally using one or more key columns) must be partitioned using a keyed partition method.

This includes, but is not limited to: Aggregator, Change Capture, Change Apply, Join, Merge, Remove Duplicates, and Sort stages. It might also be necessary for Transformers and BuildOps that process groups of related records.

Objective

Unless partition distribution is highly skewed, minimize re-partitioning, especially in cluster or Grid configurations.

Re-partitioning data in a cluster or Grid configuration incurs the overhead of network transport.

Objective 4

The partition method should not be overly complex. The simplest method that meets the above objectives will generally be the most efficient and yield the best performance.

Using the above objectives as a guide, the following methodology can be applied:

a. Start with Auto partitioning (the default).

b. Specify Hash partitioning for stages that require groups of related records, as follows:

– Specify only the key column(s) that are necessary for correct grouping, as long as the number of unique values is sufficient.

– Use Modulus partitioning if the grouping is on a single integer key column.

– Use Range partitioning if the data is highly skewed and the key column values and distribution do not change significantly over time (the Range Map can be reused).

c. If grouping is not required, use Round Robin partitioning to redistribute data equally across all partitions.

– Especially useful if the input Data Set is highly skewed or sequential.

d. Use Same partitioning to optimize end-to-end partitioning and to minimize re-partitioning.

– Be mindful that Same partitioning retains the degree of parallelism of the upstream stage.

– Within a flow, examine up-stream partitioning and sort order and attempt to preserve them for down-stream processing. This may require re-examining key column usage within stages and re-ordering stages within a flow (if business requirements permit).

Across jobs, persistent Data Sets can be used to retain the partitioning and sort order. This is particularly useful if downstream jobs are run with the same degree of parallelism (configuration file) and require the same partition and sort order.

    Collecting Data

Given the options for collecting data into a sequential stream, the following guidelines form a methodology for choosing the appropriate collector type:


1. When output order does not matter, use Auto partitioning (the default).

2. Consider how the input Data Set has been sorted:

– When the input Data Set has been sorted in parallel, use the Sort Merge collector to produce a single, globally sorted stream of rows.

– When the input Data Set has been sorted in parallel and Range partitioned, the Ordered collector might be more efficient.

3. Use a Round Robin collector to reconstruct rows in input order for round-robin partitioned input Data Sets, as long as the Data Set has not been re-partitioned or reduced.

    Sorting Data

Apply the following methodology when sorting in an IBM InfoSphere DataStage Enterprise Edition data flow:

1. Start with a link sort.

2. Specify only necessary key column(s).

3. Do not use Stable Sort unless needed.

4. Use a stand-alone Sort stage instead of a link sort for options that are not available on a link sort:

– The "Restrict Memory Usage" option should be included here. If you want more memory available for the sort, you can only set that via the Sort stage, not on a sort link. The environment variable $APT_TSORT_STRESS_BLOCKSIZE can also be used to set sort memory usage (in MB) per partition (see the sketch after this list).

– Sort Key Mode, Create Cluster Key Change Column, Create Key Change Column, Output Statistics.

– Always specify the "DataStage" Sort Utility for standalone Sort stages.

– Use "Sort Key Mode = Don't Sort (Previously Sorted)" to resort a sub-grouping of a previously-sorted input Data Set.

5. Be aware of automatically-inserted sorts: set $APT_SORT_INSERTION_CHECK_ONLY to verify but not establish the required sort order.

6. Minimize the use of sorts within a job flow.

7. To generate a single, sequential ordered result set, use a parallel Sort and a Sort Merge collector.
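A quick sketch of how the two environment variables mentioned in steps 4 and 5 might be set for a job run (the values shown are placeholders, not tuning recommendations):

export APT_TSORT_STRESS_BLOCKSIZE=512      # per-partition sort memory, in MB
export APT_SORT_INSERTION_CHECK_ONLY=1     # verify, but do not establish, the required sort order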


Stage Specific Guidelines

Transformer

Take precautions when using expressions or derivations on nullable columns within the parallel Transformer:

– Always convert nullable columns to in-band values before using them in an expression or derivation.

– Always place a reject link on a parallel Transformer to capture / audit possible rejects.

Lookup

It is most appropriate when reference data is small enough to fit into available shared memory. If the Data Sets are larger than available memory resources, use the Join or Merge stage.

Limit the use of database Sparse Lookups to scenarios where the number of input rows is significantly smaller (for example 1:100 or more) than the number of reference rows, or when exception processing is required.

Join

Be particularly careful to observe the nullability properties for input links to any form of Outer Join. Even if the source data is not nullable, the non-key columns must be defined as nullable in the Join stage input in order to identify unmatched records.

Aggregators

Use Hash method Aggregators only when the number of distinct key column values is small. A Sort method Aggregator should be used when the number of distinct key values is large or unknown.

Database Stages

The following guidelines apply to database stages:

– Where possible, use the Connector stages or native parallel database stages for maximum performance and scalability.

– The ODBC Connector and ODBC Enterprise stages should only be used when a native parallel stage is not available for the given source or target database.

– When using Oracle, DB2, or Informix databases, use the Orchestrate Schema Importer (orchdbutil) to properly import design metadata.

– Take care to observe the data type mappings.

Datastage Coding Checklist

Ensure that the null handling properties are taken care of for all the nullable fields. Do not set the null field value to some value which may be present in the source.

Ensure that all the character fields are trimmed before any processing. Normally, extra spaces in the data may lead to errors, like lookup mismatches, which are hard to detect.


Always save the metadata (for source, target, or lookup definitions) in the repository to ensure reusability and consistency.

In case the partition type for the next immediate stage is to be changed, then the "Propagate partition" option should be set to "Clear" in the current stage.

Make sure that appropriate partitioning and sorting are used in the stages, wherever possible. This enhances the performance. Make sure that you understand the partitioning being used; otherwise leave it at Auto. Make sure that the pathnames [...]. Use a 4-node configuration file for unit testing.


Ensure that reject links are output from the Sequential File stage which reads the data file, to log the records which are rejected.

Check whether datasets are used instead of sequential files for intermediate storage between the jobs. This enhances performance in a set of linked jobs.

Reject records should be stored as sequential files. This makes the analysis of rejected records outside DataStage easier.

Ensure that a dataset read from another job uses the same metadata which is saved in the repository.

Verify that the intermediate files that are used by downstream jobs have Unix read access [...]. Use upper case for column names and table names in SQL queries.

Check that the parameter values are assigned to the jobs through the sequencer.

For every Job Activity stage in a sequencer, ensure that "Reset if required, then run" is selected where relevant.


Know your DataStage Jobs Status without Director

############################################################################
# AUTHOR   : Atul Singh
# DATE     : Jan 04, 2013
# PLATFORM : (AIX, HP-UX, Linux, Solaris - All *nix)
# PURPOSE  : This script takes 2 inputs as arguments and fetches the
#            DataStage job status and the last Start/End time of the job
############################################################################


# Assumption: the lost opening lines read the project and job names from the
# two command-line arguments, as the usage message below implies.
if [ $# -eq 2 ]
then
    PROJECT=$1
    JOB=$2
    out=`dsjob -jobinfo $PROJECT $JOB | egrep 'Job Status|Job Start Time|Last Run Time'`
    echo "$PROJECT\t$JOB\t$out"
else
    echo "Please execute the script like: $0 PROJECT_NAME JOB_NAME"
fi

    "ips + "ric%s #or debugging a DataStage job

    !he article tal,s about &ataStage debugging techni=ues. !his can be applied to a job which

    is not producing proper output data or

    to a job that is aborting or generating warnings

    >se the &ata Set anagement utility# which is available in the !ools menu of the &ataStage&esigner or the &ataStage anager# to e(amine the schema# loo, at row counts# and delete a"arallel &ata Set. You can also view the data itself.

    -hec, the &ataStage job log for warnings or abort messages. !hese may indicate anunderlying logic problem or une(pected data type conversion. -hec, all the messages. !he "X

    jobs almost all the times# generate a lot of warnings in addition to the problem area.

    ?un the job with the message handling 4both job level and project level) disabled to nd out ifthere are any warning that are unnecessarily converted to information messages or droppedfrom logs.

    9nable the '"!5&> "5S-6?9 using which you would be able see how di erent stages arecombined. Some errors7!S environment variables. 'lso enable6SK5"?+7!5S-K9 'S to ensure that a runtime schema of a job matches the design0time schemathat was e(pected.

    Sometimes the underlying data contains the special characters 4li,e null characters) indatabase or les and this can also cause the trouble in the e(ecution. +f the data is in table ordataset# then e(port it to a se=uential le 4using &S job). !hen use the command Ocat NtevP orOod N(cP to nd out the special characters.

    6nce can also use Owc 0lc lenameP# displays the number of lines and characters in thespeci ed 'S-++ te(t le. Sometime this is also useful.
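For example (the file name is a placeholder for the exported sequential file):

cat -tev /tmp/cust_extract.txt | head   # show tabs (^I), line ends ($) and other non-printing characters
od -xc /tmp/cust_extract.txt | head     # hex and character dump of the same data
wc -lc /tmp/cust_extract.txt            # number of lines and bytes in the file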

Modular approach: If the job is very bulky, with many stages in it, and you are unable to locate the error, then one option is to go for a modular approach. In this approach, one has to do the execution step by step. E.g. if a job has 10 stages, then create a copy of the job. Just keep, say, the first 3 stages and run the job. Check the result, and if the result is fine, then add some more stages (maybe one or two) and again run the job. This has to be done until one locates the error.

Partitioned approach with data: This approach is very useful if the job is running fine for some set of data and failing for another set of data, or failing for a large number of rows. In this approach, one has to run the job on a selected number of rows (using @INROWNUM, and @PARTITIONNUM in PX). E.g. a job when run with 10K rows works fine and is failing with 1M rows. Now one can use @INROWNUM and run the job for, say, the first 0.25 million rows. If the first 0.25 million are fine, then run from 0.26 million to 0.5 million, and so on.

    "lease note# if the job parallel job then one also has to consider the no. of partitions in the job.

    6ther option in such case is N run the job only one node 4may be by setting using'"!59X9->!+675 6&9 to se=uential or using the con g le with one node.

    9(ecution mode Sometime if the partitions are confusing# then one can run the job inse=uential mode. !here are two ways to achieve this

    >se the environment variable '"!59X9->!+675 6&9 and set it to se=uential mode.

    >se a con guration le with only one node.
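A minimal sketch of the second option (the path is a placeholder for a one-node .apt file you have created):

export APT_CONFIG_FILE=/opt/IBM/InformationServer/Server/Configurations/one_node.apt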

A parallel job fails and the error does not tell which row it failed for: In this case, if the job is simple, we should try to build the equivalent server job and run it. Server jobs can report the errors along with the rows which are in error. This is very useful in cases when DB errors like primary key violations occur.


For information about configuring the ODBC environment for your specific database, see the DataDirect Drivers Reference manual (odbcref.pdf) located under $DSHOME.


if [ -z "$DSRPCD_PORT_NUMBER" ]
then
    true
##DSRPCD_PORT_NUMBER_TAG##
fi

if [ -z "$APT_ORCHHOME" ]
then
    APT_ORCHHOME=


#LANG="<langdef>";export LANG
#LC_ALL="<langdef>";export LC_ALL
#LC_CTYPE="<langdef>";export LC_CTYPE
#LC_COLLATE="<langdef>";export LC_COLLATE
#LC_MONETARY="<langdef>";export LC_MONETARY
#LC_NUMERIC="<langdef>";export LC_NUMERIC
#LC_TIME="<langdef>";export LC_TIME
#LC_MESSAGES="<langdef>"; export LC_MESSAGES

# Added by Kev
LANG="en_US";export LANG
LC_ALL="EN_US.UTF-8";export LC_ALL
LC_CTYPE="EN_US.UTF-8";export LC_CTYPE
LC_COLLATE="EN_US.UTF-8";export LC_COLLATE
LC_MONETARY="EN_US.UTF-8";export LC_MONETARY
LC_NUMERIC="EN_US.UTF-8";export LC_NUMERIC
LC_TIME="EN_US.UTF-8";export LC_TIME
LC_MESSAGES="EN_US.UTF-8"; export LC_MESSAGES
# End of addition

# Old libpath
# LIBPATH=`dirname $DSHOME`


ulimit -d unlimited
ulimit -m unlimited
ulimit -s unlimited
ulimit -f unlimited
# below changed to unlimited from 1024
ulimit -n unlimited

LDR_CNTRL=MAXDATA=0x60000000@USERREGS
export LDR_CNTRL

# General Path Enhancements
PATH=$PATH:...

# for Multiple instance connection
####################################################
DB2INSTANCE=db2; export DB2INSTANCE
DB2DIR=


    "'!KTR"'!K R+7S!K6 9?-9S\

    localuvZ

    &/ S!Y"9 T >7+U9?S9

    networ, T !-"


    &/ S!Y"9 T 6&/-

    6racleDZ

    &/ S!Y"9 T 6&/-

    +nformi(Z

    &/ S!Y"9 T 6&/-

Setting environment variables for the parallel engine in DataStage

You set environment variables to ensure smooth operation of the parallel engine. Environment variables are set on a per-project basis from the Administrator client.

Procedure

1. Click Start > All Programs > IBM Information Server > IBM InfoSphere DataStage and QualityStage Administrator, and log in to the Administrator client.

2. Click the Projects tab, and select a project.

3. Click Properties.

4. On the General tab, click Environment.

5. Set the values for the environment variables as necessary.

Environment variables for the parallel engine

Set the listed environment variables depending on whether your environment meets the conditions stated for each variable.

Network settings

1. APT_IO_MAXIMUM_OUTSTANDING

If the system connects to multiple processing nodes through a network, set the APT_IO_MAXIMUM_OUTSTANDING environment variable to specify the amount of memory, in bytes, to reserve for the parallel engine on every node for TCP/IP communications.


    !he '"!5S97&/> S+ 9 and '"!5?9-U/> S+ 9 values are the same. +f you set one of theseenvironment variables# the other is automatically set to the same value. !hese environmentvariables override the '"!5+65 'X+ > 56>!S!'7&+78 environment variable that sets the totalamount of !-" S+ 9

    +f any of the stages within a job has a large number of communication lin,s between nodes#

    specify this environment variable with the !-" S+ 9 and '"!5?9-U/> S+ 9 values are the same. +f you set one of theseenvironment variables# the other is automatically set to the same value. !hese environmentvariables override the '"!5+65 'X+ > 56>!S!'7&+78 environment variable that sets the totalamount of !-"
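A sketch of how these variables might be set (the byte values are placeholders, not recommendations):

export APT_IO_MAXIMUM_OUTSTANDING=2097152   # total TCP/IP buffer memory reserved per node, in bytes
export APT_RECVBUFSIZE=1048576              # per-connection buffer, in bytes; APT_SENDBUFSIZE is set to match automatically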


    How to deploy a configuration file in DataStage

Hi Friends! Now, how to deploy/apply the Conf file.

Deploying the new configuration file

Now that you have created a new configuration file, you use this new file instead of the default file. You use the Administrator client to deploy the new file. You must have DataStage Administrator privileges to use the Administrator client for this purpose.

To deploy the new configuration file:

1. Select Start > Programs > IBM Information Server > IBM WebSphere DataStage and QualityStage Administrator.

2. In the Administrator client, click the Projects tab to open the Projects window.

3. In the list of projects, select the tutorial project that you are currently working with.

4. Click Properties.

5. On the General tab of the Project Properties window, click Environment.

6. In the Categories tree of the Environment variables window, select the Parallel node.


7. Select the APT_CONFIG_FILE environment variable, and edit the file name in the path under the Value column heading to point to your new configuration file.

You have deployed your new configuration file.

Applying the new configuration file

Now you run the sample job again. You will see how the configuration file overrides other settings in your job design.

To apply the configuration file:

1. Open the Director client and select the sample job.

2. Click the Reset button to reset the job so that you can run it again.

3. Run the sample job.

    How to Create a Configuration File in DataStage

Hi Friends! The configuration file plays a most important role in a parallel DataStage job. This is the file which provides the parallel environment for a job to run in parallel.

Today I am going to share how to create/edit the Conf file.

Creating a configuration file

Open the Designer client and follow the steps below:

1. Select Tools > Configurations to open the Configurations editor.


2. Click Check to ensure that your configuration file is valid.

3. In the Configuration name field of the Save Configuration As window, type a name for your new configuration, for example node4.

4. Click Save and select Save configuration from the menu.

Must know for a UNIX programmer - some tips

This article gives an overview of commands which are very useful for executing complex tasks in a simple manner in a UNIX environment. It can serve as a reference during critical situations.

Multi-line comments in a shell program


As such, the shell does not provide any multi-line commenting feature. However, there is a workaround. To comment a block of lines, we can enclose the statements to be commented using ": '" and "'", but then the block cannot contain a single quote. In order to circumvent that problem, one can use the HERE document for a multi-line comment, as given below.
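A minimal sketch of the HERE-document approach (the delimiter name COMMENT is arbitrary):

: <<'COMMENT'
Any text placed here is ignored by the shell,
even if it contains 'single quotes'.
COMMENT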


Upon executing the nslookup command, it consults the Domain Name Servers and fetches the IP addresses of the given hostname.
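For example (the hostname is a placeholder):

nslookup www.example.com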

    "hanging the timestamp on a file to a past date

    7henever a file is created, is ta(es the time stamp of current system time. "o change the timestamp of files to a past date,let
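A minimal sketch using touch (the file name and date are placeholders; -t takes [[CC]YY]MMDDhhmm[.SS]):

touch -t 201301011830 report.txt    # sets the timestamp to 18:30 on 01-Jan-2013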