My first impression of the Palo Worksheet Server 3

Recently I got a chance to download Palo Worksheet Server 3. I was planning to build a small test case and impress some people at work with what Palo can do, but after playing with it a bit more I'm not sure I should really show too much of the Worksheet Server during my presentation. Here is why:

  • Pros:
    • The frontend of Worksheet Server looks nice.
    • The Charts and Micro Charts look nice.
    • Many people won't think of Palo as a full BI tool if it doesn't provide its own frontend.
  • Cons:
    • Why did Jedox release software that is so unstable? I'm always worried about doing too much in a worksheet because I don't know what's going to happen next. Will I get lots of "value" errors? Will my session end and I lose my data? (That just happened.)
    • If you are used to Excel and Google Docs you get impatient: some of the context menus are too slow.
After working with it for half a day I wouldn't let a client create their own reports; the frustration level would probably be too high. Am I alone with that opinion?


Anonymous said…
No, you are not ... I like the ETL Server, which suits Palo very well, but I faced lots of problems using the Worksheet Server ... quite buggy. Hope you are feeling well in Canada! All the best, Lars
George said…
Hi. Any updates on this topic?

Is a copy of your thesis available to read?
Ben said…
I haven't had a close look at Palo lately, but I'm pretty sure they have improved by now. The latest version on their website is Palo Web 3.1 from December 2009.

I wish I could publish my thesis, but the company I worked for didn't want it published because it contains customer data. I might be able to help you if you have specific questions.
George said…
It takes work, but customer data can usually be de-identified.

I'm at an early stage of comparative research, so currently no questions. Thanks though!
