Category Archives: Open Source

This Graph is VERY special!

What’s so special about this graph? Well, other than the fact that it’s a really cool way to translate an XML document into a visual HTML report (this isn’t an image, it’s actually an HTML document), the technologies used to produce it are REALLY compelling. Chris Harrington and I spent a couple of hours this morning working to generate a few reports using both of our “technology stacks,” and with just a few small adjustments we were able to do something that spells big potential.

Let me first describe exactly how this chart was built:

  • An XMLA request was passed OVER THE INTERNET (from Pittsburgh to Seattle) from the ThinOLAP command-line XMLA client to a Mondrian instance I have running in my lab (a sketch of what such a request looks like appears after this list).
  • The XMLA request was parsed and passed to the Mondrian ROLAP server, which issued JDBC calls to my BizGres database.
  • My BizGres database executed a handful of SQL statements that Mondrian used to generate the actual data values for the MDX query passed from across the country.
  • The XMLA response was sent back OVER THE INTERNET (from Seattle to Pittsburgh) to the ThinOLAP command-line client.
  • ThinOLAP then applied a post-processing XSLT transformation to turn the boring XMLA results (i.e., pure data) into a really compelling chart.
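For the curious, here’s a minimal sketch of what such an XMLA Execute request looks like on the wire. The MDX, catalog, and DataSourceInfo values are illustrative, not the exact ones we used:

    <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
      <SOAP-ENV:Body>
        <!-- XMLA Execute: ships an MDX statement to the provider -->
        <Execute xmlns="urn:schemas-microsoft-com:xml-analysis">
          <Command>
            <Statement>
              SELECT {[Measures].[Sales]} ON COLUMNS,
                     {[Time].[2005].Children} ON ROWS
              FROM [SalesCube]
            </Statement>
          </Command>
          <Properties>
            <PropertyList>
              <!-- illustrative values; the provider maps these to its configured data sources -->
              <DataSourceInfo>Provider=Mondrian</DataSourceInfo>
              <Catalog>SalesCatalog</Catalog>
              <Format>Multidimensional</Format>
            </PropertyList>
          </Properties>
        </Execute>
      </SOAP-ENV:Body>
    </SOAP-ENV:Envelope>

The response comes back as an ExecuteResponse carrying an MDDataSet: pure data, which is exactly what the XSLT step then turns into the chart.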

Why is this exciting?

First off, because it’s using XML for Analysis, which was supposed to be a hot ticket item but has less than stellar support from vendors (including the 800 lb gorilla that “supports” the standard). There are a handful of XMLA clients, almost ALL of which are tied up in “dot net this” and “dot net that” and are clearly written to work with SQL Server 2005 Analysis Services. Mondrian happens to be the only other PUBLIC/PRODUCTION OLAP server that implements the interface (if the other vendors are actually public with their XMLA providers, please let me know via email). We proved that the standard is pretty sound and that with a small amount of tweaking, interoperation is pretty straightforward! UPDATE: I’ve just read that HYPERION has a provider, but I’m uncertain of how useful it is in practice. It also appears that SAS has a provider, but again I’m not sure if there are any clients that aren’t fused to the existing MS technologies (OLE DB, etc.).

Secondly, because it’s running over the Internet at pretty respectable speeds. Between two remote offices the REQUEST/RESPONSE was reasonable (a couple of seconds). This is one of the first times I’ve actually received some benefit from an SOA perspective over the Internet. Since it’s all HTTP, Chris and I figured there was no need to ship software back and forth; we just did some port forwarding and it all worked BRILLIANTLY. We proved that XMLA can work, reasonably, over the public Internet.

Thirdly, the server side is 100% OSI Certified software. That’s right… my provider was serving up OLAP goodies using Linux, Tomcat, Mondrian (part of the Pentaho platform), and BizGres. There were ZERO proprietary applications running to build and deliver the XMLA responses. The XMLA client (ThinOLAP) is a freeware product that uses Microsoft packages for XML processing, scripting, transformations, etc. We proved that one can build an open source OLAP solution that can be leveraged by whizbang clients.

Beyond open source, Mondrian is a JDBC ROLAP server, so it can be plugged into just about any database that supports JDBC. Choice is GOOD! Oracle anyone? The potential of actual interoperating XMLA clients is HUGE! Think about how many boutique visualization, charting, and reporting engines have been fused to OLE DB and other proprietary protocols. Perhaps a handful of interoperating clients/servers will entice the others to write to the actual standard?
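To make that concrete, here’s a sketch of the kind of datasources file Mondrian’s XMLA servlet reads (element names are from my memory of Mondrian’s sample config, so double-check before using). Swapping databases is just a matter of changing the JDBC URL and driver:

    <DataSources>
      <DataSource>
        <DataSourceName>Provider=Mondrian</DataSourceName>
        <DataSourceDescription>Mondrian XMLA provider</DataSourceDescription>
        <URL>http://myserver:8080/mondrian/xmla</URL>
        <!-- point at BizGres/Postgres today... -->
        <DataSourceInfo>Provider=mondrian;Jdbc=jdbc:postgresql://localhost/sales;JdbcDrivers=org.postgresql.Driver;Catalog=/schemas/Sales.xml</DataSourceInfo>
        <!-- ...or at Oracle tomorrow, by swapping only that line:
             Provider=mondrian;Jdbc=jdbc:oracle:thin:@localhost:1521:orcl;JdbcDrivers=oracle.jdbc.OracleDriver;Catalog=/schemas/Sales.xml -->
        <ProviderName>Mondrian</ProviderName>
        <ProviderType>MDP</ProviderType>
        <AuthenticationMode>Unauthenticated</AuthenticationMode>
      </DataSource>
    </DataSources>

The cube schema (Sales.xml here) stays the same; only the JDBC connect information changes.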

Chris and I continue to work at seeing what potential exists for interoperating his Intrasight product with Mondrian. If you’re interested in using Mondrian, or a nice AJAX client for Mondrian, please don’t hesitate to get in touch with either of us.

Ingres GPL: I'm not alone!

Well, apparently I’m not alone in my disapproval of Ingres choosing the GPL. I just noticed the following on the Ingres licensing page:

Important Notice on GPL Licensing
Posted February 12, 2006

We have had significant feedback from many of you who would like to bundle access or include the Ingres access technologies (JDBC, ODBC, .Net, OpenAPI, ESQL etc.) with your non-GPL products, or link with these technologies, but are concerned about the viral aspects of the GPL.

We understand this important customer/partner requirement and are working to make this possible without limiting the needs of GPL partners and customers, or for end customers to get support.

I value this feedback from the Ingres community and the level of interest in using Ingres as an embedded or bundled database in applications as diverse as appliances, printers, and ERP solutions. Please check back in the next few days for an update on this topic.

To receive email updates on this topic, please email info@ingres.com with the subject GPL.
Dev Mukherjee
Chief Marketing Officer
Senior Vice President of Business Development

Hopefully this will be one of those cases where the community gives tough love and it turns out in the best long-term interest of the project. We’ll see!

CTO on GPL

I met with the CTO of a company leveraging open source projects to build a compelling offering in the BI/DW space. Their secret sauce stays closed source, but they are contributing some good stuff back to the core projects they build on.

We were discussing the dual licensing GPL/Commercial strategy and he put it quite nicely:

For business focused projects that have little or no value to hobbyist projects, it’s almost antagonistic offering GPL

This echoes my thoughts on the Ingres move to GPL. Business-focused open source projects know that they have to exist in an economy of open and closed source applications, so offering a GPL version of their project could be considered insulting! In practice, it becomes shareware (try for free, but use for money). For instance, Ingres Corporation is saying that you are free to use their database if your product is GPL. This is like offering an underprivileged inner-city youth who only just passed their GED a scholarship to a PhD program in astrophysics. They can claim they are being benevolent, but they know that student will never get accepted to the program.

My advice to “business focused” open source projects: if you want to flourish, you must have a business-friendly license; otherwise your community will not arrive. The hobbyists don’t need advanced business workflow processing, and the companies that could invest in your technology won’t, because they need business-friendly licenses.

MySQL works because it actually benefits the community: “Joe’s MP3 collection manager” really does need a simple database. MySQL sits on the “cusp” of a business application, so there is a reasonable value add for MySQL in the GPL and vice versa.

I’m continually amazed at how the words “open source” and the actual practice of “open source” are getting twisted, turned, mauled, and misused to make a buck. It’s entirely possible for someone to choose GPL and not build a great open source project… Community (developers, companies, users) is the secret sauce, so your license, posture, and philosophy must engage your potential or existing community with the idea that getting people invested in your project (i.e., hard working hours of coding, testing, and leveraging your project) will far surpass any tricky antics to force people to pay you some money.

Ingres screws up their open source license, again!

IMPORTANT: Some people have commented/emailed that I’m off my rocker for asserting that using a GPL database triggers the viral nature of the GPL. I should make the VERY CLEAR distinction that the reason it triggers the GPL (I believe) is that the CLIENT ACCESS LIBRARIES are GPL licensed (the client code is embedded into the CLIENT applications). I don’t think it really matters how the server, running in a different process, is licensed; it’s the JDBC drivers that cause the GPL to kick in. Refer to the MySQL interpretation of this (LGPL to GPL).

UPDATE: I’ve been told that the announcements on GPL may have been incomplete. There is a possibility that the interface code may be LGPL to allow for use in ISV/OEM apps without GPL restrictions. I’ll update this if/when there is an actual announcement on the LGPL stuff.

I was quite optimistic when Ingres was divested from CA. Here was an opportunity for Ingres, a mature, feature-rich database, to make a real dent in the world. Instead, Ingres (the codebase) has made the same mistake AGAIN by choosing a license which will ensure it never becomes a “real” open source community, project, or sustainable business model.

CA made the mistake of picking a funny license when they open sourced Ingres: CA-TOSL was an off-color license, and because of it many people passed up even considering Ingres. CA squandered an opportunity to create a real open source project, which is just disheartening. There is SO MUCH in Ingres that with “two cups and two tablespoons of community” it could have been the leading open source database. Instead, they came up with a “pro-CA” but relatively business-friendly license that meant Ingres, its source repository, etc. remained decidedly in the fortress of CA.

Ingres was divested to a lean, mean Ingres company with the sole purpose of making Ingres the leading “business open source.” This was truly exciting, and the potential of what Ingres could become was palpable. Parallel query execution, intelligent join operations, replication, distribution, table partitioning. A robust, feature-rich RDBMS primed for ISVs/OEMs and early adopters needing an open source RDBMS worthy of their applications. Forget the GUIs, marketing materials, and blackbox functionality: Ingres was primed to be the tool of choice for companies willing to overcome some of the rough edges to power their platforms. It could have been the “Kleenex” of FOSS DB. The “Hobart” of FOSS DB. The “Phillips Screwdriver” of FOSS DB.

Terry, I’m sorry to say, you must have received some bad advice. Actually, let me be clear here: REALLY BAD ADVICE! The GPL is the nail in the coffin that will make the long, drawn-out history of Ingres final. It will fade to obscurity because NO ONE will touch it with the GPL. Let me explain, and I hope for your investors and customers that I am wrong!

The days of the general-purpose RDBMS are done!
You are going to be used as a component in an application or service. If people are purchasing their DB first and plan to build extensively on a database, they are choosing commercial vendors. Quite simply put, if someone wants to build their application and the database is the most important piece of that puzzle (think PL/SQL, triggers, etc.) then it’s not an open source database. ISVs and OEMs are the ONLY ONES that are going to build sustainable economic conditions for an open source database. Your company identifies this in the “Three Degrees of Separation” whitepaper posted on your site.

GPL means that companies building applications for BUSINESS won’t use you!
Why do you think that most BUSINESS ORIENTED applications considering an open source database have chosen Postgres? The GPL is, quite frankly, really really scary to companies, and so NO ONE who is building any sort of “Pro” edition, building applications with significant amounts of their own capital, or using another business-friendly open source license (which is one of the valid open source business models) will use it. Quite simply put, the GPL ensures that Ingres will not be used by companies building business applications.

Partners/ISVs/OEMs willing to purchase your license will choose SOMEONE ELSE!
Knowing that your GPL option is entirely unsuitable for embedding into an application, why would anyone choose to PAY the licensing for Ingres when embedded proprietary licenses are cheap? That is, knowing that customers aren’t going to get the benefit of an open source database, it may as well be a black box that an application runs on, and that’s it. Why choose to pay for a product that is technically behind Oracle when the “embedded” commercial version terms are comparable? If it costs the same amount and both are proprietary licenses, why would someone selling a business application forego the “powered by Oracle” to choose a proprietary Ingres license?

Zero chance at “community”
Ingres (the project, not the company) has failed to build any sort of community in more than a year. By ensuring that the entire Ingres codebase remains copyright Ingres Corporation, you pretty much ensure that any features people may build into Ingres won’t see the light of day. This is the secret sauce of open source: inventors inventing, leveraging the ideas and efforts of people all over the world. Well then, what about the GPL projects of the world that will use Ingres and contribute their time testing, QA’ing, bug fixing, etc.? See the next bullet.

The GPL “database of choice” is already decided
Nearly every GPL project uses MySQL. Community, applications, knowledge, support, training, brand awareness. Done, finished. GPL projects needing a DB don’t really think about it; it is the de facto standard for GPL projects. Why would anyone writing Joe’s Open Source Scheduling and Management Toolkit choose Ingres, which has NO community outside of Ingres Corp and will NOT develop one (see the previous bullet)? Anyone needing your “enterprise features” above and beyond what MySQL provides will fit the category mentioned above (ISV/OEM) and find your license unsuitable.

Quite simply put, the CA-TOSL was bad for Ingres, but the GPL is just horrendous (not because the GPL is necessarily horrendous, just the GPL for Ingres). Anyone who needs your features won’t use your commercial license (think integrated health care application X) when they can use other proprietary RDBMSs, and open source projects that COULD use Ingres won’t because they don’t need the features and MySQL is 10x better from a community/FOSS perspective.

I’m sad, actually; I was quite excited about Ingres. RIP, Ingres. I hope the death comes swiftly and that customers find little pain in their migration to other sustainable RDBMS projects/products.

Great post on Commercial Open Source

Nanoblog (which has nothing to do with nanotechnology) has good advice for Commercial Open Source Companies. I know the Nanoblog author; he is working in an IT organization that is trying to make OSS work in their environment, so he’s formed these opinions working with multiple OSS companies.

Some comments:

“Let go of the control on product vision.”
Product vision is what helps a COSC make money… If there is zero revenue in feature X, then the company shouldn’t fund it. On the flip side, the company should not stand in the way if someone who cares nothing about revenue wants to build feature X themselves. There is nothing evil about a COSC prioritizing its contribution to OSS according to its revenue, as long as it isn’t dictating standards/interfaces/etc. that exclude the collective community will.

“Play well with other open source products.”
This is easier said than done. It’s tough enough to assemble products with expertise on call (companies) and thorough and complete documentation and integration “road maps.” However, the “integrator” in open source almost always bears the burden on both sides of the projects… Someone wanting to play nicely with other OSS projects would have to become quite knowledgeable about the other project to be successful in that integration.

“The community is your number one customer”
I think this makes a good sound bite, but it just can not be true for a company. Right or wrong, a company’s purpose is to generate profit. It’s not a company (it’s a trade org, or co-op, or association, or non-profit) if it serves another interest (perfectly valid, right? Mozilla Foundation, yes?).

I think Nanoblog omits some of the important, mutually beneficial aspects of COSCs. Open source “projects,” at their core, are interesting and useful to engineers as they put together solutions. However, it is the application or solution that is of actual value to society, people, and business. I.e., the Mozilla XUL package is valuable to the engineer, but its value to the world is actually as Firefox. I.e., TCP/IP is cool to the engineer, but email changes the world. Thinking of money as a currency of reward for value, people/societies/biz aren’t willing to pay for pure engineering genius, unless of course it’s an act of charity. They ARE willing to pay for something that makes their life better, easier, or less expensive, or provides an emotional experience ($20 designer toilet brush at Target).

The commercialization of open source provides the actual benefit of the inventor’s work to society… it validates the inventor’s passion with mass adoption or value; I can think of nothing more gratifying to the inventor’s spirit than to “change the world” for billions of people. This actual benefit is what people are willing to contribute back to the company/community. This, in theory, should provide the needed capital for the inventor to continue the pursuit of passion to provide the world his/her talent. This is not always the case; some productizations of open source don’t necessarily reward the actual inventions for their worth. How much profit should the person bringing technology X to the market reap when they did nothing more than “apply it” instead of “invent it”?

Herein lies the paradox of Commercial Open Source that is the most interesting “tech” debate of the 21st century. Commercialization of open source projects brings sustaining capital as reward to these projects (in terms of committers, paid OSS developers, testing, QA, conferences, etc.). However, commercial direction in open source is considered in opposition to the freedoms of OSS and the inventor’s spirit. I.e., commercial application of OSS (whether for internal engineers or commercial open source applications) pays the bills of the inventors while they invent. This has always been the case (i.e., Bell Labs was able to function because of huge profits from the commercialization of technology at AT&T).

The tsunami of open source “VALUE” is just beginning. Microsoft didn’t invent the mouse; they made it valuable to you and me. And so on. I think it’s very much “black and white with 1000 shades of grey” how companies are balancing this. Some do well, others do poorly. Time will tell what makes up the real secret sauce of “commercial open source.”

Pentaho Visit : Day 4

Today was a bit of a whirlwind… I had a bunch of constructive conversations with the Pentaho folks about their solution, their license, where their company is headed, etc. I’m most impressed with their company and engineering staff. Their product is coming together, and they’re getting used by real customers (big ones with tough problems). They haven’t announced names on many of these, but they’ll be recognizable brands and companies.

I just started having a look at Kettle this week… I’m quite impressed with it, actually. I won’t be able to go into detail on the matter right now, so I’ll defer that to some later posts. Suffice it to say that I was able to get a simple ETL transformation squared away in about 10 minutes. For anything open source, this is impressive, so kudos to Matt Casters of Kettle! 🙂

Speaking of full posts, I had some interesting discussions with the Pentaho folks about their license (PPL 1.0). I want to write something more comprehensive on the matter, so I’ll defer on that as well.

We covered some of the portal integration today, and WOWSERS. Building portals in open source tools right now is not very, ummmm, user friendly. This has little to do with Pentaho per se, as they are providing pretty cool portlet reference implementations, etc. It doesn’t appear that there are any good open source portal development tools (visual development anyone?) at the mo’. It might just be that I haven’t seen them, so if I’m wrong please do let me know. BTW, this is one area where I’d LOVE to actually BE wrong. Email me if you know of a good open source portal product.

We also looked at some of the UI customization provided. There’s some cool stuff in this… It can be used to generate, say, lists of values on pages, which is necessary for creating these dynamic dashboards.

From the screenshot you can see some radio buttons and a select box that were built using the Pentaho-provided “Action Sequences.” Pretty cool stuff all in all.

I had to leave a bit early (some people are staying on till tomorrow for a bit more), but I thoroughly enjoyed my time with the Pentaho team. James, Bill, Lance, Richard, and the rest of the tribe: many thanks for the hospitality and all the great information on your evolving product!

Pentaho Visit : Day 3

This is the first time that Pentaho has engaged partners and customers directly in this sort of classroom training. Needless to say, today was a bit of a challenge in terms of getting the examples to work properly, making tweaks, and doing some very rough exercises to get to know the platform a bit better. I’ve met with many engineering teams across organizations, nationalities, geographies, and industries, and suffice it to say many have lots of issues and politics going way back. It is always nice to see a team that gets along and has a sense for the right amount of “right” versus “ship it” mentality, and I think Pentaho has that balance. Their engineers have helped move open source BI along further than I would have thought six months ago…

On that note… the product is still quite technical. They have a workbench (refer to yesterday’s post) that provides just one layer of abstraction on the XML document solution. It’s an earnest effort, but it falls short of any product-based “BI Solution Builder” that I’ve seen. Time, of course… time and money. Like any product still in its maturing phase, it will improve… Version 1.0 shares the vision and builds the evangelists among the early adopters. Version 2.0 is the robust product, right? (Did anyone actually find Windows version 1.0 intuitive and mature?)

We created our own “Action Sequence” today… These are the composite pieces of a reporting solution (run a query, iterate and print reports, and email). While the engine appears to execute the XML-based documents properly and well, building these files using the workbench took some effort. Many in the class found it easier to pop behind the scenes and just edit the documents directly. That being said, the power behind the architecture selected is compelling. With an XML solution of about 50 lines (a rough sketch follows the list) we were able to define a solution that:

  1. Received a parameter request (customer_id) from JBoss
  2. Queried a web service to determine which business unit manages that account
  3. Built a query based on the specifics of the business unit (Canada needs this query, the US needs that one)
  4. Executed that query
  5. Built a PDF report based on the results using a JFreeReport template
  6. Delivered it back to the browser
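I can’t reproduce our exact document here, but a stripped-down sketch of the shape of such an action sequence looks something like the following. The component names and the {customer_id} substitution syntax are my best recollection from the class, so treat them as approximate rather than the official schema:

    <action-sequence>
      <title>Regional Sales Report</title>
      <inputs>
        <!-- step 1: the customer_id parameter arrives with the HTTP request -->
        <customer_id type="string">
          <sources><request>customer_id</request></sources>
        </customer_id>
      </inputs>
      <actions>
        <!-- steps 2 through 4: look up the business unit (web service call omitted),
             then execute the query chosen for that unit -->
        <action-definition>
          <component-name>SQLLookupRule</component-name>
          <component-definition>
            <query>SELECT * FROM sales WHERE customer_id = {customer_id}</query>
          </component-definition>
        </action-definition>
        <!-- step 5: render the results as a PDF using a JFreeReport template -->
        <action-definition>
          <component-name>JFreeReportComponent</component-name>
          <component-definition>
            <output-type>pdf</output-type>
          </component-definition>
        </action-definition>
      </actions>
      <!-- step 6: the platform streams the generated PDF back to the browser -->
    </action-sequence>

The real documents carry more plumbing (resources, output mappings, the web service call), but that’s the gist of how roughly 50 lines of XML drive the whole flow.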

The core of their product is pretty robust, and I think they have a significant competitive advantage (not even just in open source) with their report delivery intelligence. Pretty cool stuff… Does anyone know of any other BI product with intelligent workflow report processing (flows, cases, loops, web services, etc.)?

We covered the scheduling components. This is a needed feature in the platform, but I wasn’t all that jazzed about it. It uses Quartz and follows a pretty basic schedule, restart, pause, etc. I think it’s a value add to the project, but not exactly anything I get excited about.

We also discussed OLAP, Mondrian integration, and what that looks like under the Pentaho umbrella. I wasn’t expecting much, because Pentaho is clearly focused on delivering reporting features. There really isn’t much in there in this regard, apart from the already good project work to date by the Mondrian/JPivot projects. There were some simple integrations into the platform, but it really isn’t that different from what you get if you used these separately. For most of the other pieces of the platform there is a clear roadmap (we plan on building feature X in Q2); for the OLAP/crosstab pieces, however, there was some uncertainty beyond “we know it’s not great, but we’ll continue to improve.” I’m hopeful that I might be able to make a contribution here, but I don’t exactly know how.

Tomorrow we get into the star schema and ETL side of the house…

Pentaho Visit : Day 2

Heading from the hotel to the Pentaho office this morning, it was bright, sunny, and a beautiful day in Orlando. News from Seattle was 23 straight days of rain, but that doesn’t dampen my desire to head home at the end of the week. I really love Seattle and always look forward to returning.

That being said, I rather enjoyed today working with the folks at Pentaho. Today we got into some of the details of their solution, and much of the material and documentation started to make more sense based on their plain-English explanations of which pieces of the platform fit where.

At a high level, nearly everything in the Pentaho platform executes as an “Action Sequence.” This has some significant architectural benefits that we won’t belabor here, but suffice it to say that this allows for great flexibility in deployment options. In fact, the three products below all interacted with these Action Sequences using a different “application” method (an Eclipse plugin, a standalone Java app, and a JBoss-deployed web app), all drawing from the same core libraries. At a fundamental level, the Pentaho server is metadata driven (not in the database-metadata sense) in that the Pentaho base simply implements solutions defined by a variety of XML documents. Nothing new here (this is common, I’d say; anyone feel differently?) but a good choice all the same.

What is an Action Sequence? A sequenced and parameter-driven set of Action Definitions (i.e., run report, generate PDF, email to Joe).

What is an Action Definition? A configured instance of an Action Component (i.e., EMAIL=Joe, SUBJECT=Aug 2006 Sales Report, component “Email Sender”).

What is an Action Component? An implementation of an activity in the system (i.e., an Email Sender component, a PDF Generation component, etc.).

The AS is the driving class (for x in resultSet, do AD1, AD2, AD3). The AD is the specific instance of a call to a component (AD1 with Email=Joe, smtpserver=mail.bayontechnologies.com, etc.). The AC is the Java class that implements the interface for components (public class EmailSenderComponent implements WhateverPentahoInterfaceItActuallyIs).
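Put concretely, that email AD might serialize to something like the sketch below. The element names are approximations based on what we saw in class, not the official schema:

    <action-definition>
      <!-- AD: a configured instance of the Email Sender component (the AC) -->
      <component-name>EmailComponent</component-name>
      <action-inputs>
        <to type="string"/>      <!-- bound to "Joe" by the enclosing AS -->
        <subject type="string"/> <!-- "Aug 2006 Sales Report" -->
      </action-inputs>
      <component-definition>
        <smtp-server>mail.bayontechnologies.com</smtp-server>
      </component-definition>
    </action-definition>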

OK… now that the basics are out there, let’s talk specifics. Today we got to dig into three major pieces of the system:

Report Server
This is the piece that runs on a server somewhere and executes Pentaho solutions. It has a web interface for interacting with the Pentaho server, it has a runtime repository for running these solutions, and it is able to schedule Action Sequences to be run. This is similar to a Crystal Enterprise server that receives reports and then schedules, runs, and distributes them. The demo installation of this is wicked easy on Windows, as I alluded to yesterday. The installation on Linux did require some coaxing, as do many things in Linux. Clearly, the out-of-the-box implementation works best with Windows, and Linux requires some effort, so I’ll ding Pentaho for that. However, I can’t really fault them, because 90% of evaluators will be giving them their 10 minutes of eval time on Windows boxes, so it’s a good decision for the project/company.

Some screenshots of the working server, with a handful of the reports that come with the installation:

Notice the parameter-driven selections. We’re getting into those tomorrow, but it looks promising.

The Crosstab JFreeReport is quite limited.

There is some cool dashboard stuff too… We’ll be getting into that later on.

Pentaho Report Wizard
This is a standalone Java application that makes the basic sequence of getting a basic report “running” pretty quick. It’s in its infancy; in fact, I don’t believe it has been released to sourceforge.net yet, but I might be mistaken on that. Pentaho is planning to release it on SourceForge in a few days.

Step 1: Define your data location and your query.

Notice the lack of a query builder, which will deter some from even considering this a useful wizard. Pentaho acknowledged they need that piece in here and will work at improving the wizard incrementally. However, there’s little you can’t copy and paste into the SQL area, so Step 1 is quite powerful in getting at your data. In Oracle, for example, consider the power of ‘SELECT month, value, grp, LAG(value) OVER (PARTITION BY grp ORDER BY month) AS prior_period FROM my_values_table’. Good for the SQL gurus.

Step 2: Select which columns are your items, which are to be “grouped” in the report.

Nothing really special here. At this point I noticed that nothing in this application actually requires a fat Java client. It’s all check boxes, select boxes, arrow buttons, etc., similar to some of the JSF ADF Faces components that Oracle puts out. This is a prime candidate for a community-built AJAX wizard! Any takers on that one?

Step 3: Report Format Options
Missed that screenshot, oops.
This is where you can set page breaks at group boundaries, etc. There are also some formatting options here: background color, justification, and so on.

Step 4: Formatting Options.

Now, this is not really that advanced… It provides the “formatting options” for the wizard, which doesn’t actually use a template. I believe Mike on the Pentaho team coined it the “Non-Template Template.” Basically, because you’re using a wizard, you give it basic formatting things (group heading font, color, etc.) and it will generate a template for you. If you want more than that, you gotta build your own. Incidentally, underneath the covers this wizard is building a JFreeReport definition. Pentaho can build reports using JFreeReport, BIRT, and Jasper; however, I think JFreeReport is what the team is using for the wizard.

Step 5: Preview

The output options are generated by JFreeReport: Excel, PDF, and HTML. The PDF on Linux blew up for me. The HTML worked all right. I didn’t try the Excel; I’ll wait for Open Document (just kidding). As you’ll see, JFreeReport actually generates a pretty decent report. Clearly this is not the “pixel perfect” solution some commercial offerings have, but it’s really not bad.

Step 6: NICK’s BONUS STEP

You also need to save the report. 🙂 The natural last step of the process is to save the report (four files that constitute the report definition) to be used by either the REPORT SERVER (above) or the workbench (below).

Pentaho Workbench:

The workbench is an Eclipse plugin that edits Pentaho solutions. Since their solutions are, for all intents and purposes, plain XML documents, this is a good fit. My initial impressions are, well, just OK. Clearly this beats writing XML by hand based on their spec, but it’s a pretty rough GUI when it comes to making sense out of the whole thing. This GUI will be efficient for those who know the underlying XML structures and the specifics of their Pentaho actions (Components/Sequences/Definitions). However, for the person just coming onto the platform it will be kind of daunting. Again, it should improve, but for now it’s not exactly user friendly.

We used the workbench to take the report we created in the wizard and “parameterize” it. We made the SQL’s WHERE clause parameter-driven and bound it to a “request.PARAM1” item that Pentaho sets in its server environment (a rough sketch follows). It was a little difficult understanding the context of what we did, but I have to say that when we copied it up to the server it worked brilliantly.
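Roughly, and with the same caveat that the syntax here is from memory rather than the spec, the parameterization amounted to declaring the input and referencing it in the query:

    <inputs>
      <PARAM1 type="string">
        <sources><request>PARAM1</request></sources>
      </PARAM1>
    </inputs>
    ...
    <query>SELECT * FROM orders WHERE region = '{PARAM1}'</query>

The server binds request.PARAM1 from the HTTP request, substitutes it into the query, and the rest of the action sequence runs as before.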

More tomorrow… Email me at the usual address (in the right column)… I’ll also offer to bring along any questions, especially those of the Oracle community, to see if I can’t get them answered.