2 Replies Latest reply: Nov 29, 2017 2:18 AM by Ronja Hall

    group incidents by start and end time

    Ronja Hall



      I am new to QlikView and thus new to this forum...

      I have been struggling with a script in QlikView for a while now. A client wants to group occurrences if the end time of one occurrence is the same as the start time of the next (i.e., they occur directly after each other). They should be grouped under the Id of the occurrence that came first; the Ids of the adjacent occurrences should then be omitted.


      The source data (database) looks something like this:


      Id    Start Time                 End Time                   Reason
      55    2017-02-24 16:06:58.000    2017-02-24 16:26:58.000    delay
      56    2017-02-24 16:26:58.000    2017-02-24 16:56:54.000    switch
      57    2017-02-24 16:56:54.000    2017-02-24 17:15:34.000    personnel issue


      These three occurrences should be grouped under Id 55, for example renamed to Group_id; a QV table should then show:


      Group_id    Start Time                 End Time                   Reason
      55          2017-02-24 16:06:58.000    2017-02-24 16:26:58.000    delay
      55          2017-02-24 16:26:58.000    2017-02-24 16:56:54.000    switch
      55          2017-02-24 16:56:54.000    2017-02-24 17:15:34.000    personnel issue



      A former co-worker tried to solve it by finding the next and previous occurrence with the help of the Previous() function on the time columns, ordering the dataset first descending and then ascending (creating temporary tables to feed the next table), creating true/false (1/0) columns for Next and Prev, and then grouping the occurrences if the value in either or both of those columns is 1. However, it does not give the desired result.


      Below is the first part of the script, where I think the fault lies (when looking at the result of those temporary tables, the Next and Prev columns are not populated correctly):



      Stop_1:
      NoConcatenate
      Load *,
        if(StartTime = previous(EndTime), 1, 0) as Prev,
        0 as Next
      Resident Stop_DATA
      Order By Id;




      NoConcatenate
      Load *,
        if(EndTime = previous(StartTime), 1, 0) as Prev,
        0 as Next
      Resident Stop_1;
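
      For reference, a corrected version of that two-pass idea might look like the sketch below. It is untested against the real data, and it assumes the table and field names from the fragments above; the main changes are that the second pass sorts descending and writes a Next flag instead of overwriting Prev:

      ```
      // Pass 1 (ascending by Id): flag rows that start exactly when the
      // previous row ended. Previous() reads the prior input record in
      // the sorted load order.
      Stop_1:
      NoConcatenate
      Load *,
        if(StartTime = previous(EndTime), 1, 0) as Prev
      Resident Stop_DATA
      Order By Id asc;

      // Pass 2 (descending by Id): flag rows that end exactly when the
      // following row starts.
      Stop_2:
      NoConcatenate
      Load *,
        if(EndTime = previous(StartTime), 1, 0) as Next
      Resident Stop_1
      Order By Id desc;
      Drop Table Stop_1;

      // A row belongs to a chain if Prev = 1 or Next = 1; the group id
      // would then be the Id of the first row in each chain.
      ```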


        • Re: group incidents by start and end time
          Antonio Mancini

          Maybe this:

          Temp:
          LOAD * Inline [
          Id, Start Time, End Time, Reason
          55, 2017-02-24 16:06:58.000, 2017-02-24 16:26:58.000, delay
          56, 2017-02-24 16:26:58.000, 2017-02-24 16:56:54.000, switch
          57, 2017-02-24 16:56:54.000, 2017-02-24 17:15:34.000, personnel issue
          155, 2017-03-24 16:06:58.000, 2017-03-24 16:26:58.000, delay
          156, 2017-03-24 16:26:58.000, 2017-03-24 16:56:54.000, switch
          157, 2017-03-24 16:56:54.000, 2017-03-24 17:15:34.000, personnel issue
          ];

          Result:
          LOAD *, If(Peek([End Time])=[Start Time], Peek(Group_ID), Id) as Group_ID
          Resident Temp
          Order By Id;
          Drop Table Temp;
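
          The reason this works where Previous() did not: Peek() with no row argument reads the last record already written to the table being loaded, including fields created in that same load, so Group_ID can chain through consecutive rows. A tiny illustration with made-up data (times are plain text here, compared as strings):

          ```
          Temp2:
          LOAD * Inline [
          Id, StartTime, EndTime
          1, 08:00, 09:00
          2, 09:00, 10:00
          3, 11:00, 12:00
          ];

          Chained:
          NoConcatenate
          LOAD *,
            // If this row starts when the previously *output* row ended,
            // inherit that row's Group_ID; otherwise start a new group.
            If(Peek(EndTime) = StartTime, Peek(Group_ID), Id) as Group_ID
          Resident Temp2
          Order By Id;
          Drop Table Temp2;

          // Expected: Ids 1 and 2 get Group_ID 1; Id 3 starts a new group (Group_ID 3).
          ```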

            • Re: group incidents by start and end time
              Ronja Hall

              This is great! The grouping works according to start and end times.

              However, I missed a few things in my description of the problem.

              The occurrences should only be grouped if they are adjacent in time to an occurrence of a particular kind, based on a code/id in another dimension (in the source table); let's call it 'Code'.


              So, for example, if occurrences with Code '1014' have other occurrences occurring right before or after them, those are the ones we want to group with a Group Id. No occurrences except the 1014 ones and those directly before or after them should be included in the final dataset. All rows with Code 1014 should be included even if they occur alone, with their original Id.
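
              One way to sketch that on top of the Peek() chaining from the reply above (untested; it assumes the source sits in a table called Temp with a numeric Code field alongside Id, [Start Time], [End Time], and that the code literal 1014 is compared as a number - adjust names and literals to the real model):

              ```
              // Pass 1 (ascending in time): keep 1014 rows, and rows that
              // start exactly when a 1014 row ended (adjacent after).
              Asc:
              NoConcatenate
              LOAD *,
                If(Code = 1014
                   or (Peek(Code) = 1014 and Peek([End Time]) = [Start Time]),
                   1, 0) as KeepAsc
              Resident Temp
              Order By [Start Time] asc;

              // Pass 2 (descending in time): also keep rows that end exactly
              // when a 1014 row starts (adjacent before).
              Keep:
              NoConcatenate
              LOAD *,
                If(KeepAsc = 1
                   or (Peek(Code) = 1014 and Peek([Start Time]) = [End Time]),
                   1, 0) as KeepFlag
              Resident Asc
              Order By [Start Time] desc;
              Drop Table Asc;

              // Final: filter to the kept rows, then chain Group_IDs exactly
              // as in the earlier reply. A lone 1014 row keeps its own Id.
              Final:
              NoConcatenate
              LOAD *,
                If(Peek([End Time]) = [Start Time], Peek(Group_ID), Id) as Group_ID
              Resident Keep
              Where KeepFlag = 1
              Order By [Start Time] asc;
              Drop Table Keep;
              ```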