Jan

24

GIPS: A Passport to credibility for fund performance

Posted by: Johan Korteland

As the financial crisis continues around the world with absolutely no resolution in sight, the importance of flawless asset management becomes clearer by the day.

Is this just marketing mumbo jumbo or does it really hold some truth?

As a graphic designer I frequently search the web for the latest (web) design techniques, but also to get a grasp of what the purpose of GIPS really is and how a company like Koster Engineering could help asset managers merge the GIPS standards smoothly into their operations.

The following linked article (published by 'The Hedgefund Journal' in October 2010) still holds true today. It is still important for major firms to obtain a GIPS certification, otherwise they may be excluded from certain RFPs. It is something to think about during the current financial turmoil.

Click here to read the full article by Stuart Fieldhouse, as published by 'The Hedgefund Journal' in October 2010. (http://www.gipsstandards.org/news/inthenews/pdf/the_hedge_fund_journal_gips_article.pdf)

You are of course very welcome to share your thoughts on the subject.

 

Jul

19

Robust Client Reporting Event

Posted by: Martine Dols

As I wrote earlier in our blog, we attended the Marcus Evans event on Robust Client Reporting: 2.5 days in London sharing, discussing and challenging ideas and trends on client reporting. It's time to evaluate. Let's forget the first half day, with a workshop on data visualisation presented on slides that primarily contained words.

Thursday and Friday were filled with high quality presentations and panel discussions. The main client reporting challenges identified were:

  • increasing volumes 
  • accuracy 
  • timeliness 
  • on-line 
  • compliance 

Both accuracy and compliance were included in our own presentation, titled “The Art of Control”. (For details on this presentation see our other blog posts with this title.) This was great input for our product development. We are already working on mock screens for on-line reporting, to be shared later this year.

The audience was small compared to other events, which created an intimate atmosphere. On Friday afternoon we had a round table discussion evaluating the event and discussing how to cope with the above-mentioned challenges. I was impressed by how openly each participant shared his or her concerns. The participants' expectations of learning and exchanging ideas were definitely met. Our goals to talk to the participants, learn about their client reporting issues and have a great time were achieved as well. Thanks to the organisation, which showed great flexibility and service orientation during the event.

 

Jul

19

The Art of Control (part 3 of 3)

Posted by: Francois Koster

The 3 Elements of Integrated Control

1. Completeness
The first, most obvious and most commonly used control area is checking whether all the required data for a report is present.

Primary and secondary data dependencies
Identifying the main data required by the report is a key aspect in determining completeness. The primary data dependencies are normally relatively straightforward to check for missing values. Secondary data dependencies are a little more complicated. This can be compounded if this data does not live in the system that you are reporting from. As an example, we could take the valuation of a portfolio. On a primary level we receive the net asset value; on a secondary level this is derived from the value of the individual positions of that portfolio. If a price is missing or a cash flow is not booked, then we have complete but incorrect data. In such cases, it is essential that the secondary level data is available to check or clearly indicates an error which can be caught by a quality control process.
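
To make this concrete, here is a minimal sketch in Java (with hypothetical names - it is not taken from any particular system) of a secondary-level completeness check that compares the reported net asset value with the value derived from the individual positions and flags missing prices:

import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

// Hypothetical position record: instrument id, quantity and a possibly missing price.
record Position(String instrumentId, BigDecimal quantity, BigDecimal price) {}

public class CompletenessCheck {

    // Returns a list of control messages; an empty list means the check passed.
    public static List<String> checkValuation(BigDecimal reportedNav, List<Position> positions) {
        List<String> issues = new ArrayList<>();
        BigDecimal derivedNav = BigDecimal.ZERO;

        for (Position p : positions) {
            if (p.price() == null) {
                // Secondary-level gap: the NAV may look complete but is built on a missing price.
                issues.add("Missing price for instrument " + p.instrumentId());
            } else {
                derivedNav = derivedNav.add(p.quantity().multiply(p.price()));
            }
        }

        // Primary vs. secondary level: the reported NAV should match the value derived from the positions.
        if (reportedNav.subtract(derivedNav).abs().compareTo(new BigDecimal("0.01")) > 0) {
            issues.add("Reported NAV " + reportedNav + " differs from derived NAV " + derivedNav);
        }
        return issues;
    }
}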

Static and Reference data

Probably the largest cause of the "small inconsequential change" phenomenon. Changes to static data can cause reporting queries to lose significant chunks of results. Similarly, reference data definitions, for example the portfolio reporting currency, can wreak havoc on a report if they are not defined. All static and reference data requirements should be validated to ensure report completeness.

Gaps

Not all missing data is readily identifiable, as data does not always arrive at regular intervals. If the expected frequencies are known, then it should be possible to check a data set for any gaps.
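
For a series with a known frequency, such a gap check could look something like this rough sketch (monthly data and month-end dates are just an assumption for the example):

import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

public class GapCheck {

    // Given the observation dates of a monthly series, report every month-end
    // between the first and last observation that has no data point.
    public static List<LocalDate> findMissingMonthEnds(Set<LocalDate> observationDates) {
        List<LocalDate> gaps = new ArrayList<>();
        if (observationDates.isEmpty()) {
            return gaps;
        }
        TreeSet<LocalDate> sorted = new TreeSet<>(observationDates);
        LocalDate cursor = sorted.first();
        LocalDate last = sorted.last();

        while (!cursor.isAfter(last)) {
            LocalDate monthEnd = cursor.withDayOfMonth(cursor.lengthOfMonth());
            if (!sorted.contains(monthEnd)) {
                gaps.add(monthEnd);            // known frequency, but no observation for this period
            }
            cursor = monthEnd.plusDays(1);     // move to the first day of the next month
        }
        return gaps;
    }
}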

2. Quality

The second area, which is less often applied, is to validate the quality level of the data behind the report.

Readiness of data

Integrating the control into the report benefits from also being bound to the main control process. Having indicators that let the report know whether the base data is available and has been validated prevents people from having to analyze problems only to find out that they simply executed the report too early.
Additionally, this is an excellent way of flagging a report as a "first cut" or "early indications" report and making sure that this information remains clearly identifiable to any person that the report is distributed to.

Anomalies

Checking for data anomalies is very report and system specific. Where possible a trend should be established and then a plausibility check made against that trend. This can be, for example, a portfolio return that lies beyond a certain deviation point when compared to the average return of similarly managed portfolios. Or an instrument return that deviates strongly from its associated benchmark using differing boundaries for monthly, quarterly and yearly returns. Or an unexpected volatility in a series of currency rates.
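
As an illustration of the first example, a plausibility check against the peer average might look like this sketch (the threshold value and the method names are purely illustrative):

import java.util.List;

public class AnomalyCheck {

    // Flags a portfolio return that deviates from the average return of
    // similarly managed portfolios by more than the given threshold
    // (e.g. 0.02 for 2 percentage points - the value is purely illustrative).
    public static boolean isSuspicious(double portfolioReturn, List<Double> peerReturns, double threshold) {
        if (peerReturns.isEmpty()) {
            return false;   // no trend available, nothing to compare against
        }
        double peerAverage = peerReturns.stream()
                .mapToDouble(Double::doubleValue)
                .average()
                .orElse(0.0);
        return Math.abs(portfolioReturn - peerAverage) > threshold;
    }
}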

Duplication
A great way to mess up any report or query is to add a duplicate record. In most cases this creates the lovely effect of doubling all values. Such problems are usually easy to spot on a report but can be very troublesome to locate in the base data. More subtle data duplications may cause only minor shifts and go more easily unnoticed.

Neutralizing known inconsistencies

This is an essential part of the control mechanism; without it everything falls apart. The simple fact is that many of the discrepancies found by the above controls may simply reflect reality: they are in fact correct, or nothing can be done about them. The consequence is a report with an ever-growing list of errors, which people simply start ignoring, and new problems get lost amongst the known ones. Having the ability to register known and verified problems, preferably with some form of comment explaining the issue, allows you to suppress repetitive errors on the reports or expand upon them. This means that when someone sees an error on a report, they know it is something that has not already been analyzed.
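
One possible shape for such a register (the structure and names are illustrative, not a description of any specific product) is a simple store of verified issues with their explanatory comments, so that a control finding is only raised as new when it has not been analyzed before:

import java.util.HashMap;
import java.util.Map;

public class KnownIssueRegister {

    // Maps an issue key (e.g. reportId + "|" + errorCode) to the comment
    // entered by the person who analyzed and accepted the discrepancy.
    private final Map<String, String> knownIssues = new HashMap<>();

    public void register(String reportId, String errorCode, String comment) {
        knownIssues.put(reportId + "|" + errorCode, comment);
    }

    // A control finding is only escalated as a new error if it has not
    // already been analyzed; known issues are shown with their explanation.
    public String classify(String reportId, String errorCode) {
        String comment = knownIssues.get(reportId + "|" + errorCode);
        return comment != null
                ? "KNOWN: " + comment
                : "NEW: needs analysis";
    }
}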

3. Reconciliation

Probably the most neglected, but most powerful, control mechanism that truly comes into its own as an integrated control.

It's all about cross-checking and is the best way to check consistency within a report.

Look for values that should sum up to another value that is independently calculated.

Reconciliation is a very report specific area, but when used it can be a very strong validation mechanism. Identifying values on a report that are related provides assurance that all these values are properly present on the report. Sometimes simply adding some independently retrieved control figures can confirm the quality of the report.
For example, in a contribution report we assume that the sum of the contributions should correspond to the portfolio return. An attribution report (depending on method) could use the sum of the effects in comparison to the relative return. A portfolio report could use the sum of the net asset values in comparison to the total assets under management of the firm.
The more connections you can determine, the better the validation of the report will be. As an added bonus, most of the additional data needed to check the values are usually summary values and are quick to retrieve while the detail data is already present in the report itself.
While your other controls check the quality of the underlying data, this control checks whether the data is properly handled within the report. This is an ideal method for protecting against queries that go wrong due to configuration and reference data changes as well as protecting against calculation errors.
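
Taking the contribution example above, a minimal reconciliation sketch might check that the contributions on the report sum up to the independently retrieved portfolio return within a tolerance (the single-period arithmetic and the tolerance value are simplifying assumptions):

import java.util.Map;

public class ReconciliationCheck {

    // Cross-check: the sum of the instrument contributions shown on the report
    // should match the independently calculated portfolio return within a tolerance.
    public static boolean reconcilesWithPortfolioReturn(Map<String, Double> contributionsByInstrument,
                                                        double portfolioReturn,
                                                        double tolerance) {
        double sumOfContributions = contributionsByInstrument.values().stream()
                .mapToDouble(Double::doubleValue)
                .sum();
        return Math.abs(sumOfContributions - portfolioReturn) <= tolerance;
    }
}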

Conclusion

With ever increasing pressure on the time to market of reports and the increasing flexibility of reporting systems, there is a proportional increase in the risk of control process breaks. A few simple integrated controls can be implemented to support the standard control process and mitigate the regulatory and reputational risks. With integrated controls you can easily block reports from being distributed to customers (internal or external) in case of erroneous data or provide internal first-cut reports safe in the knowledge that the quality level of the data is accurately represented.

 

Jul

12

The Art of Control (part 2 of 3)

Posted by: Francois Koster

A Second Line of Defence

How can we avoid this?
By integrating the control process within the report itself. When designing or modifying a report, the controls are established. Why not execute them with the report itself and summarize the results? Inside the report, it is possible to identify very precise control criteria which can be used to make the report self-validating.

How do I know if my report is correct?
Because the report will tell me if any errors are present. By holding the control information in the report, you gain the ability to flag the report as having errors or potential issues. Additionally, since the report may well be restricted to a single entity, you can also show a detailed error message which might allow the user the option to correct the error themselves without having to get access to a specialist who can analyze the source of the reporting problem.
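
As a rough sketch of what holding the control information in the report could look like (the class and status names are purely illustrative, not a description of our product), the generated report carries its own control results and an overall flag that travels with it:

import java.util.ArrayList;
import java.util.List;

public class SelfValidatingReport {

    public enum Status { OK, FIRST_CUT, ERROR }

    private final String reportBody;                           // the rendered report content
    private final List<String> controlMessages = new ArrayList<>();
    private Status status = Status.OK;

    public SelfValidatingReport(String reportBody) {
        this.reportBody = reportBody;
    }

    // Controls are executed together with the report and their results stay attached to it.
    public void addControlResult(boolean passed, String message, boolean blocking) {
        if (!passed) {
            controlMessages.add(message);
            if (blocking) {
                status = Status.ERROR;
            } else if (status == Status.OK) {
                status = Status.FIRST_CUT;
            }
        }
    }

    // Distribution can be blocked, or the report clearly flagged, based on its own controls.
    public boolean mayBeDistributed() {
        return status != Status.ERROR;
    }

    public Status getStatus() { return status; }
    public List<String> getControlMessages() { return controlMessages; }
    public String getReportBody() { return reportBody; }
}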

...but don't neglect your traditional controls.

The above mechanism should be a second line of defence and you should still use your traditional controls as the primary method. Maintaining these controls allows some feedback into the secondary controls and, on a very simple level, allows you to pre-inform users that reports will have errors because the control process is incomplete.

...and don't forget to control your control reports.

A simple, but commonly occurring, issue is that the control process fails because the control reports themselves have errors. Applying the same logic inside the control reports provides a higher degree of confidence that problems have been identified and handled.

Pros and Cons

Cons? The following points are listed as cons, but it is perhaps worth challenging whether this is truly the case.

· Longer time to market

Since more analysis is required, it will take longer to create the report, although if reports are built from common blocks (like a dashboard) then the controls should already have been established and no re-work is required. But I think it is inherently dangerous to produce reports without, at least somewhere, an analysis of their dependencies, so in reality that time should have been spent anyway and the real issue is to avoid duplication of effort.

· Increased Maintenance

As the complexity of the report increases, so does the maintenance. Again, I'm not sure if this is a bad thing. Having to handle the consequences of a business process change or a data sourcing change actually becomes easier when it is clear which reports are impacted. Having to modify the control steps of the report serves to highlight any changes that might need to be factored into the report itself.

That said, even if you still consider the above to be cons, the pros leave little room for argument.

Pros

· Clear indication of report integrity

The simplest and clearest benefit is that once I have my report generated, I can have a high level of confidence in its accuracy and completeness.

· Easier analysis and resolution of problems

If there are issues on the report, I already have an indicator of where the problem lies and potentially what needs to be done to resolve it.

· Reports can be more safely generated by non-expert users

Even if you have no business knowledge, you can still safely generate the report because you have a warning that there is a problem and can re-route it to an expert in such cases.

· Report integrity details are not lost when the report is distributed

The person receiving the report is also directly aware as to whether a report is complete and accurate.

Next week the final part of my thoughts on the Art of Control!

Tags: report  control  check  defence  pros  cons  integration  integrity  
 

Jul

05

The Art of Control (part 1 of 3)

Posted by: Francois Koster

As report creation gets ever more flexible, how can we ensure that reports are correct?

Modern reporting systems are pushing the boundaries of technology, making reports easier to produce than ever. These systems rarely take into account the control and compliance requirements of a report. This can expose a company to financial, reputational or regulatory risk. However, even in the traditional reporting model, the control process is often outdated or inadequate and is usually disassociated from the report.

The Problem Zones

Seven problem zones can be identified:

1. Improvements in technology neglect the control requirements.

In a traditional scenario, reports are analyzed, specified, built and verified. Control processes are put in place to verify that all the pre-requisites of the report are met.
With the improvements in technology, many reports can be built on the fly. While this is a good thing, there is usually no consideration of vulnerabilities in the report and little or no control of the report's correctness.

2. The control process is too simplistic

Even if a control process is defined, it rarely does more than check some basic data requirements and perform some minor quality checks. Commonly this is driven by a lack of business knowledge on the developers' side, a lack of technical knowledge on the business side, and limited resources in an aggressive timeline.

3. The control process is disassociated

This is, probably, the most common and fundamental problem with control processes for reports. The process is created with the initial report creation and does not remain linked to the report itself. As the report evolves the control process stays static and new or changed elements in the report create vulnerabilities.
An additional problem with disassociation is that the execution of the process occurs independently from the report generation. So even if the process is perfect, you are still exposed to someone executing a report before the control process is completed.

4. Report distribution further disassociates the report from its control process.

The department producing a report may be fully aware of problems within a report. They will often inform the second party of these issues, but will the second party inform the third party?
This is an issue when people require early-indication reports: the control process is, by necessity, incomplete at that time, but people need to know the rough numbers. Along the way, the information that this is a first-cut report is lost and the figures are assumed to be correct.

5. Too much trust in the reporting system.
In a similar vein, when someone receives a report, they assume that the person before has checked and validated it already. Unfortunately, as trust in a reporting system increases the manual control of the report decreases.
When someone has run the same report for years without issues, they start to take for granted that everything is okay.

6. Over time the knowledge carriers are no longer the report creators.
As a report matures and trust in its completeness and correctness increases, generation of the report is migrated away from the experts to free them for new tasks.
The problem is, though, that these experts provide an additional control step that is often underestimated. Any IT person who has worked with reporting over many years is bound to have heard the comment "That can't be right". The familiarity that experts have with their data gives them an immediate feel for whether something is going wrong on a report. Sometimes these are simple signs, like the number of portfolios being wrong or the returns being too low. Sometimes these are very complex judgements on contribution or attribution figures, where familiarity creates an expectation of results and a significant deviation makes the expert suspicious of the report. Once these people are no longer running their eyes over the report before it is distributed, this simple but valuable control is gone.

7. The small inconsequential change

This is one of the nastiest but most frequently occurring problems in real-world reporting. Significant technical changes or business process modifications will mostly get caught in a test phase. If not, they cause a high impact in the production environment and are immediately addressed.
The small inconsequential change can sneak through test phases and only cause oddities in some reports under certain conditions. The result is that reports are distributed before the error is found.

 

Next week part 2 of 3

If you have anything to say about this, please share it!

Tags: control  report  problems  correct.  
 

May

30

Koster Engineering at Robust Client Reporting Event

Posted by: Martine Dols

As Koster Engineering, we will be sponsoring a Marcus Evans event titled Implementations of Robust Client Reporting. It's the first time this event is being organised and we are excited to join. The event brings people together to discuss the latest trends and challenges in client reporting in the financial industry. I have already been investigating people's motivations for being there. Most of the answers are: exchanging experiences, learning about the latest trends and getting information on software systems for client reporting. GIPS®, UCITS/KIID, Basel III and Solvency II, and their impact on reporting, are topics for discussion too.

We are preparing our presentation, titled "The Art of Control". Questions to discuss are: How do you know your report is correct? How do you manage the increasing demand for reports, speed and complexity and still make sure your reports show the correct information? It's all about a sustainable control process. I think this presentation hits the core of the event's subject.

For us it's an interesting event because the audience is of high quality and not loaded with competitors of ours. We expect to have time to talk to everyone in a relaxed atmosphere. To underline that, we are sponsoring the drinks on Thursday evening. Of course we also want to learn what the current challenges and concerns in client reporting are. This is good input for our product development.

The event will take place from the 15th to the 17th of June at the London Marriott Hotel Marble Arch, 134 George Street, London, W1H 5DN, United Kingdom.

Tags: Robust  client reporting  event  London  sponsor  control  GIPS  
 

Apr

01

Reasons for using an external GIPS® reporting software

Posted by: Francois Koster

A number of portfolio management systems now offer integrated GIPS® reporting functionality.

Why use external GIPS® reporting software?

The main arguments are that a portfolio management system is neither a reporting system nor a data warehouse.

Maintaining a proper history of portfolio properties (e.g. portfolio number or investment strategy) is rarely possible in such systems.

An external system that stores data with a time scope is better suited to archiving this data.
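
As a simple sketch of what storing data with a time scope could mean (the field names are illustrative), each version of the portfolio properties is kept with a validity period, so the history survives later changes:

import java.time.LocalDate;

public class PortfolioPropertyVersion {

    private final String portfolioNumber;
    private final String investmentStrategy;
    private final LocalDate validFrom;        // first date this version applies to
    private final LocalDate validTo;          // null = still valid today

    public PortfolioPropertyVersion(String portfolioNumber, String investmentStrategy,
                                    LocalDate validFrom, LocalDate validTo) {
        this.portfolioNumber = portfolioNumber;
        this.investmentStrategy = investmentStrategy;
        this.validFrom = validFrom;
        this.validTo = validTo;
    }

    // Was this version of the portfolio properties the valid one on the given date?
    public boolean isValidOn(LocalDate date) {
        return !date.isBefore(validFrom) && (validTo == null || !date.isAfter(validTo));
    }
}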

Additional audit trail and control reports support the GIPS® verification in a major way.

Tags: GIPS  arguments  portfolio  investment  
 

Apr

01

Java vs. .NET Substring(..) tip

Posted by: Sergiy Danilchenko

Here are a few notes about differences in how the String datatype is stored and how the Substring(..) function works in .NET and Java, which I ran into recently.



Short background:

  I recently redesigned a function that performs a transformation on some (potentially large) strings. This function needed to be optimized for memory and time consumption (especially for large source strings). I will not go into much detail => as a result the new code became a kind of char scanner that saves its result into a StringBuilder. At one stage of this scanning I need to check whether one of a set of pre-defined regular expressions matches at the current char. Of course, all regular expressions were redesigned to have the following form:

  "^(regular_expr)" (^ means the starting point of the source string in most regular expression syntaxes)

 - to make sure the regular expression is only matched from the starting char (otherwise the regular expression keeps searching for a match until the end of the string => and when this happens in a large string, and happens often, it becomes a very, very long process :-)). It was an unpleasant surprise for me that when I used the Regex.Match(string input, int startat) method for some starting char inside the string, that starting char was not treated as the FIRST one - i.e. the match failed for every starting position > 0 (!). Here is a small example, just to describe what I mean:

.....................
Regex reg_exp = new Regex("^12");
 string str_source = "0123456789";
  
bool bRes = reg_exp.IsMatch(str_source, 1); //bRes will be FALSE !!
..........................

So, to implement the check I needed quickly, I simply called String.Substring(pos) and passed the substring to the regular expression match :-) => and oops :-( - for large strings the function fell into an almost "endless in time" process and was constantly doing something with memory :-(.



This is how I found an interesting difference in how String is stored in Java and .NET (even though String objects are immutable in both languages - i.e. once created they cannot be changed):

- Java: can store a string as just a pointer to a sub-sequence of another String object's char array. So a call to String.substring(pos) does NOT allocate a new char array for the substring and does NOT copy content into it;

- .NET: stores each String in its own char array, so every distinct String has separate char array storage. A call to String.Substring(pos) therefore DOES allocate a new char array for the substring and copies the content into it.

So, when the substring is used ONLY for analyzing content (because some functionality is not supported for part of a string, only for a whole string - as it was in my case) => in .NET you will need some other way to solve the problem, because calling Substring(..) a lot will create a lot of char array allocations on the heap (which then need to be collected by garbage collection etc.).



Here are 2 small pieces of code that demonstrate this difference:

JAVA:

 
public static void main(String[] args)
{
    // Allocate a large String (10 million chars with different values)
    char[] arr_test = new char[10000000];
    for (int pos = 0; pos < arr_test.length; pos++)
        arr_test[pos] = (char)(pos % 128);
    String str_test = new String(arr_test);

    String substr = null;
    int lcount = 0;

    // Now iterate through the large string's char positions
    for (int pos = 0; pos < str_test.length(); pos++)
    {
        // For each char position take the SUBSTRING starting from it - i.e. 10 million substrings :-)
        substr = str_test.substring(pos);

        // Do something with each substring so that the substring calculation cannot be optimized away
        if (pos % 2 == 0)
            lcount += (int) substr.charAt(0);
        else
            lcount -= (int) substr.charAt(0);
    }

    // Just print a dummy result
    System.out.print("Result: ");
    System.out.print(lcount);
}
 

 - as you can see, substring(..) is called 10 million times on the large string (and some small operations are performed on each substring). This code runs in 1-2 seconds on my (not very fast) laptop, and it does not consume more memory than is needed for the source (10 million char) string (according to the VisualVM JDK tool, each substring just allocates a new pair of int values).

 


.NET (the same code - just adapted to C#):

static void Main(string[] args)
{
    // Allocate a large String (10 million chars with different values)
    char[] arr_test = new char[10000000];
    for (int pos = 0; pos < arr_test.Length; pos++)
        arr_test[pos] = (char)(pos % 128);
    string str_test = new string(arr_test);

    string substr = null;
    int lcount = 0;

    // Now iterate through the large string's char positions
    for (int pos = 0; pos < str_test.Length; pos++)
    {
        // For each char position take the SUBSTRING starting from it - i.e. 10 million substrings :-)
        substr = str_test.Substring(pos);

        // Do something with each substring so that the substring calculation cannot be optimized away
        if (pos % 2 == 0)
            lcount += (int)substr[0];
        else
            lcount -= (int)substr[0];
    }

    // Just print a dummy result
    System.Diagnostics.Debug.Write("Result: ");
    System.Diagnostics.Debug.WriteLine(lcount);
}
 

 - the same code as in Java, but it did not finish even after 2 hours :-) (and it was constantly allocating memory and copying data, so the processor was quite busy :-().

Tags: No tags defined!
 

Mar

07

Where is the GIPS® logo?

Posted by: Martine Dols

I noticed that I never saw the GIPS® logo on the websites / homepages of certified firms. After some digging I found out that it's not allowed according to the Advertising Guidelines (PDF) from the GIPS committee.

This is a missed opportunity.

GIPS should consider its own branding and be more flexible in its guidelines. At the moment you can only use the logo in a GIPS compliant presentation or in an advertisement. The advertisement may seem like an opening, but besides the logo it has to contain all kinds of technical composite information, which doesn't fit on a company homepage. Companies that are GIPS certified have made a tremendous effort to become compliant but cannot make the most of it in their marketing. The logo will be hidden and only found by people who know what to ask for.

I posted this on our linkedin group forum and got some feedback.

David Flint wrote that he thinks GIPS should have been more flexible years ago already, when there was a demand for it. He sees a trend in the UK of firms no longer seeing GIPS as a marketing tool. Mark Goodey and David Spaulding agree that being compliant is no longer a differentiator but a qualifier in the business.

Tags: gips  logo  guidelines  certified  
 

Mar

07

The tale of our new website

Posted by: Johan Korteland

As you can see we have a new website at Koster Engineering.

The layout and set-up have changed a lot. The reason it had to be renewed is that we defined our marketing strategy and chose to use social media intensively. Plenty of definitions can be found of what social media is. Wikipedia says: "Social media is the use of web-based and mobile technologies to turn communication into interactive dialogue, using highly accessible and scalable communication techniques."

As the graphic designer at Koster Engineering, I was given the task of translating this into a new website design. Key words for me were #clear, #plain, #accessible and #interactive. I wanted to get away from a design with a lot of visuals (sailboats in our case) that distract from the real message.

However, we did not abandon the whole maritime theme from the corporate layout. You will still find it 'floating' around through the corporate identity and the brochures.

With regard to the use of social media, you will find the Company Blog and a Twitter stream (on the front page). In the near future various other functions will see the light of day on the Koster Engineering website. It is still a work in progress, and as a designer I'll never say that a design or project is great and finished. As a designer you always have to see room for improvement. Most designs are good at best, but never finished. The same goes for this website.

That is the tale of our new website. More will come soon.

While you are here, please leave some feedback on what you think of our new website.

Tags: clear  plain  accessible  interactive