Channel: KNIME RSS
Viewing all 4157 articles

CData error: Could not create connection to database


I am using a JSON Input node and posting the input parameters as JSON. The problem is that when I connect the Table Column to Variable node, which holds a variable from the JSON Input, to the database connector, I get the error

"Execute failed: Could not create connection to database: cdata.jdbc.googlebigquery.GoogleBigQueryDriver"

But when I do not post any JSON, it executes successfully with the default values set in the JSON Input node.

Attached: workflow snapshot 

 

Please help me find what is going wrong.


How can we post a variable to a workflow for use in a Database Reader node?


Is there another way to post a variable to a KNIME workflow via the REST API, instead of using the JSON Input node?
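For context, posting a value to a JSON Input node over the KNIME Server REST API amounts to a POST request whose JSON body is keyed by the input node's parameter name. Below is a minimal Python sketch; the server URL, workflow path, endpoint format, and the parameter name "json-input" are all assumptions that depend on your server version and node configuration, so check your server's REST documentation before using it.

```python
# Sketch of posting a JSON input parameter to a KNIME Server workflow.
# The URL, workflow path, and parameter name "json-input" are assumptions;
# consult your KNIME Server's REST documentation for the exact endpoint.
import json
import urllib.request

# Payload keyed by the JSON Input node's parameter name.
payload = {"json-input": {"project_id": "my-bigquery-project"}}
body = json.dumps(payload).encode("utf-8")

request = urllib.request.Request(
    url="https://knime-server.example.com/rest/v4/repository/my-workflow:execution",
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(request) would actually send it; it is omitted
# here so the sketch stays self-contained and offline.
print(request.get_method(), len(body), "bytes")
```

The same request can also be issued from curl by passing the JSON payload with -d and the content-type header with -H.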

Webportal Layouting


I am building a workflow to be accessed from the WebPortal, and I would like to have two Quickform components on the same line on the WebPortal. It should be doable according to the WebPortal user guide, page 11.

However, I have been fighting with the JSON code in the layout wizard of the Wrapped Node without much success so far.

Would it be possible to have a code sample showing how to do this correctly?
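For reference, the layout JSON for placing two views side by side generally nests two columns inside one row, roughly as in the sketch below (written from memory as an illustration; the exact keys, the nodeID values, and the column width key may differ in your KNIME version):

```json
{
  "rows": [
    {
      "type": "row",
      "columns": [
        { "content": [ { "type": "view", "nodeID": "5" } ], "widthMD": 6 },
        { "content": [ { "type": "view", "nodeID": "7" } ], "widthMD": 6 }
      ]
    }
  ]
}
```

The widths are on a 12-unit grid, so two columns of width 6 share one line.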

Thanks in advance for the help.

Best Regards,
Eric.

Integer to date


Hi Knimers

A simple doubt: how can I transform my integer column (YYYYMMDD) into a date format?

PS: I converted it to a String and tried to use String to Date, but it's not working.
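For reference, the conversion being attempted is a plain date parse with a yyyyMMdd-style pattern (an assumption about the String to Date node's pattern syntax; check the format field in the node dialog). The equivalent logic in Python, for illustration:

```python
# Parse an integer column of the form YYYYMMDD into a date.
# In KNIME terms this corresponds to Number To String followed by
# String to Date with a yyyyMMdd-style format pattern.
from datetime import datetime

raw = 20160812                      # example integer value
parsed = datetime.strptime(str(raw), "%Y%m%d").date()
print(parsed)  # 2016-08-12
```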

Tks!

RapidMiner 7.2 integration is available in Community contributions nightly build


Hello,

Those who want to try another data mining tool (similar to KNIME, but with different features and limitations) are welcome to give the updated RapidMiner integration for KNIME a chance and tell us where they would like it to be improved. The update site for the nightlies is available here: http://update.knime.org/community-contributions/trunk (please do not follow it as a link; just add it as an update site).

There were some problems with the previous versions (the stable version for KNIME 3.2 is based on 5.3.15)*, so there have been significant changes since. I would recommend using a separate KNIME installation for testing, as it will be hard to go back to the stable version if you are already using it. (I have tested with older workflows and they seem to work with 7.2 too, but I cannot be sure because of the multiple major version changes.) The 7.2 version still lacks some operators that were available in 5.3 (mostly data input), so even users of the free RapidMiner version will probably find this useful (KNIME supports most of them, some of them even better). :)

If you could share your experience, what you would like to change, or which RapidMiner extensions** you would like to see better supported, feel free to leave a comment in this topic or send an email to info@mind-era.com.

Cheers, gabor

*: From 5.3.15 until 6.5 it was closed source, and 7.0 and 7.1 required some changes that I failed to adjust the continuous integration for. So basically only 6.5, 7.0 and 7.1 were not available for KNIME.

**: You should install them in your desktop RapidMiner Studio 7.2, and they should then be visible in the integration too.

Thought this would be easy


I have ten columns of data and want to predict one of them using the others. I first tried to point the Example for Learning a Neural Network at my data, but got the error ": no entry data.bin in file" when I used the Table Reader.

When I changed to a File Reader, I now get "Column 'Col36' for stratified sampling does not exist", but my data only has ten columns. I also see that the Normalizer is only showing six of my data columns.

Since it looked easy to build my own, I found and dragged the nodes and built a workflow very similar to the example (except that I am using a File Reader instead of a Table Reader), but now I get an "only double columns for input" error in the RProp MLP Learner.

Looking in the node configurations, I see that the stratified sampling dropdown in the Partitioning node only shows four of my columns.

Maybe there is a different example I should be starting from, or do I need to deal with these errors?
R to R node: Source of error for "for R evaluation failed"


Dear KNIME R community,

I am new to KNIME and am currently trying the R scripting nodes to build a model-building and prediction workflow for soil IR spectroscopy. The current workflow consists of R nodes that pass along workspaces, as the spectral data are stored in R object lists together with other metadata and chemical information. Therefore, I mostly use "R To R", "R Source (Workspace)", and "Add Table To R", because I need to access different R objects from the workspace.

The error message I get at the "R to R" node is the following:
ERROR R To R               0:36       Execute failed: R evaluation failed.

The "R To R" node that fails to evaluate uses functions from a personal R package with customized functions for soil spectroscopy. The code is shown below. However, when I evaluate the code in an RStudio session, everything works properly and I get the desired output list. Also, when I evaluate the failing node via "Configure..." > "Eval selection", I don't get any errors, meaning that all required R objects are available within the KNIME workspace.

# Perform pls modeling ------------------------------------------

# Total C
pls_C <- pls_ken_stone(
  spec_chem = soil_chemspec,
  ratio_val = 1/3,
  pc = 2,
  validation = TRUE,
  variable = C
)

I have installed the Cairo and Rserve packages in R (version 3.3.1), and the Rserve integration seems to work, as the previous nodes execute properly. Currently I'm using KNIME 3.1.2 on OS X 10.11.3.

From the error log file in KNIME I get only some hints, but I am not able to figure out why the error occurs.

I suspect that it might be a problem related to memory (maybe Rserve...), because the error log states "'memory.limit()' is Windows-specific". The R object generated within the node is a huge list of about 32 MB that contains soil chemical and spectral data, model output from the caret package, and a ggplot list object.

Note that the error occurs in node 36. I have increased the buffer size limit for Rserve to 1000 MB (configured via the KNIME settings).

Has anybody encountered similar errors? Thank you very much in advance for any hints on why the error occurs and possible ways to fix it. Below is the error log.

Cheers, Philipp

2016-08-12 10:34:14,172 : DEBUG : main : NodeContainerEditPart :  :  : Add Table To R 0:35 (EXECUTED)
2016-08-12 10:34:14,172 : DEBUG : main : NodeContainerEditPart :  :  : R To R 0:36 (CONFIGURED)
2016-08-12 10:34:16,626 : DEBUG : R Error Reader : RBinUtil :  :  : External Rscript process error output: Warnmeldung:'memory.limit()' is Windows-specific
2016-08-12 10:34:16,626 : DEBUG : R Output Reader : RBinUtil :  :  : External Rscript process output:
2016-08-12 10:34:17,441 : DEBUG : R Error Reader (port:52499) : RController :  :  : Lade nötiges Paket: RcppArmadillo
2016-08-12 10:34:17,694 : DEBUG : R Error Reader (port:52499) : RController :  :  :
2016-08-12 10:34:17,694 : DEBUG : R Error Reader (port:52499) : RController :  :  : Attache Paket: 'dplyr'
2016-08-12 10:34:17,694 : DEBUG : R Error Reader (port:52499) : RController :  :  :
2016-08-12 10:34:17,694 : DEBUG : R Error Reader (port:52499) : RController :  :  : The following objects are masked from 'package:stats':
2016-08-12 10:34:17,694 : DEBUG : R Error Reader (port:52499) : RController :  :  :
2016-08-12 10:34:17,694 : DEBUG : R Error Reader (port:52499) : RController :  :  :     filter, lag
2016-08-12 10:34:17,695 : DEBUG : R Error Reader (port:52499) : RController :  :  :
2016-08-12 10:34:17,696 : DEBUG : R Error Reader (port:52499) : RController :  :  : The following objects are masked from 'package:base':
2016-08-12 10:34:17,696 : DEBUG : R Error Reader (port:52499) : RController :  :  :
2016-08-12 10:34:17,696 : DEBUG : R Error Reader (port:52499) : RController :  :  :     intersect, setdiff, setequal, union
2016-08-12 10:34:17,696 : DEBUG : R Error Reader (port:52499) : RController :  :  :
2016-08-12 10:34:17,699 : DEBUG : R Error Reader (port:52499) : RController :  :  : Lade nötiges Paket: ggplot2
2016-08-12 10:34:17,716 : DEBUG : R Error Reader (port:52499) : RController :  :  :
2016-08-12 10:34:17,716 : DEBUG : R Error Reader (port:52499) : RController :  :  : Attache Paket: 'cowplot'
2016-08-12 10:34:17,716 : DEBUG : R Error Reader (port:52499) : RController :  :  :
2016-08-12 10:34:17,716 : DEBUG : R Error Reader (port:52499) : RController :  :  : The following object is masked from 'package:ggplot2':
2016-08-12 10:34:17,716 : DEBUG : R Error Reader (port:52499) : RController :  :  :
2016-08-12 10:34:17,716 : DEBUG : R Error Reader (port:52499) : RController :  :  :     ggsave
2016-08-12 10:34:17,716 : DEBUG : R Error Reader (port:52499) : RController :  :  :
2016-08-12 10:38:15,183 : DEBUG : main : NodeContainerEditPart :  :  : R To R 0:36 (CONFIGURED)
2016-08-12 10:38:15,183 : DEBUG : main : NodeContainerEditPart :  :  : Add Table To R 0:35 (EXECUTED)
2016-08-12 10:38:16,505 : DEBUG : main : NodeContainerEditPart :  :  : Add Table To R 0:35 (EXECUTED)
2016-08-12 10:38:16,505 : DEBUG : main : NodeContainerEditPart :  :  : R To R 0:36 (CONFIGURED)
2016-08-12 10:38:18,686 : DEBUG : R Error Reader : RBinUtil :  :  : External Rscript process error output: Warnmeldung:'memory.limit()' is Windows-specific

 

 

Failed XLS Writer


Hi Knimers

I was working with a file of 519,728 rows and 26 columns, but I couldn't export it using the XLS Writer.

It exported pretty fast with the CSV Writer, but I still want to understand what happened with the XLS Writer.

I got the messages below:

1st Try: ERROR XLS Writer           0:7        Execute failed: GC overhead limit exceeded

2nd Try: ERROR XLS Writer           0:8        Execute failed: Java heap space
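Both messages indicate that the JVM ran out of memory while the XLS Writer built the whole workbook in memory. A common mitigation is to raise KNIME's maximum heap in knime.ini, roughly like the fragment below (the 2048m value is only an illustration; the -Xmx line belongs after -vmargs):

```
-vmargs
-Xmx2048m
```

Note also that the legacy .xls format is limited to 65,536 rows per sheet, so a table of 519,728 rows would need to be split across sheets or written in the .xlsx format in any case.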

Tks for your support


Database Looping Bug


Hi everybody,

I think there is a bug in the Database Looping node.

I get this failure during execution:

  • ERROR Database Looping     11:521     Execute failed: No group 3

Could you give me a solution, please? :-)

I made a small workflow to show you the bug. In my real workflow, the error does not occur at the same row!

You can find the workflow here.

You can see the error in this picture.

<img src="http://ringtarget.com/DatabaseLooping_bug.png">

 

Gradient Boosted Trees to PMML/SQL


Hi,

I have a question about the Gradient Boosted Trees Learner node. Can I somehow export/save the learned model as PMML, and then convert it to a SQL query? I know how to do this with the Tree Ensemble Learner node, but I can't do the same with the Gradient Boosted Trees Learner.

Why am I asking? I need to integrate my learned model with a SQL Server warehouse, so I see two solutions:

1. Create a table-valued function based on the model converted to a SQL query,

2. Integrate it somehow with SSIS, but I don't know how to export the Gradient Boosted Trees model to something usable from SSIS/C#.
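For illustration only: translating a single decision tree into SQL usually means emitting nested CASE expressions, and a boosted ensemble is then the sum of one such expression per tree (scaled by the learning rate). The sketch below uses an invented tree structure and made-up column names; it is not the PMML produced by KNIME:

```python
# Hypothetical sketch: render a toy decision tree as a SQL CASE expression.
# A gradient boosted model would be the SUM of one such expression per tree,
# each scaled by the learning rate. The tree below is invented for illustration.

def tree_to_sql(node):
    """Recursively render a tree node as a SQL expression."""
    if "value" in node:                      # leaf: constant prediction
        return str(node["value"])
    # internal node: split on column <= threshold
    cond = f"{node['column']} <= {node['threshold']}"
    left = tree_to_sql(node["left"])
    right = tree_to_sql(node["right"])
    return f"CASE WHEN {cond} THEN {left} ELSE {right} END"

# Toy tree over two made-up columns, "age" and "income".
tree = {
    "column": "age", "threshold": 40,
    "left": {"value": 0.2},
    "right": {
        "column": "income", "threshold": 50000,
        "left": {"value": 0.5},
        "right": {"value": 0.8},
    },
}

sql = tree_to_sql(tree)
print(sql)
```

An expression built this way can be wrapped directly in a table-valued function, which matches solution 1 above.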

I will be really grateful for any help.

show clusters in binary images using dbscan


Hi all,

I generated a few PNG files (binary images) in which I am trying to detect the clusters formed by white dots.

I searched the forum and found the "Image to DataRow" node, which is able to turn the image into an array of zeros and ones, so I built List Files -> Image Reader -> Image To DataRow, but I can't get any clusters when I connect that to DBSCAN.

How can I convert these binary images to a format DBSCAN understands?
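One possible explanation (an assumption, not verified against the actual workflow): DBSCAN clusters points in coordinate space, so it needs one row per white pixel with its (row, column) coordinates, rather than one row per image holding the flat vector of 0/1 pixel values. A Python sketch of that conversion:

```python
# Sketch (assumption): convert a 2D binary image into one (row, col)
# coordinate pair per white pixel, which is the shape of input DBSCAN
# expects, instead of a flat vector of 0/1 pixel values.

def white_pixel_coords(image):
    """Return (row, col) coordinates of all 1-valued pixels in a 2D image."""
    return [
        (r, c)
        for r, row in enumerate(image)
        for c, value in enumerate(row)
        if value == 1
    ]

# Toy 4x4 binary image with two white "dots".
image = [
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 1, 1],
]

points = white_pixel_coords(image)
print(points)  # coordinate pairs suitable as DBSCAN input rows
```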

Thanks for your time

onur

Twitter Search


Hello all,

I'm completely new to KNIME, and for a paper at my university I need to analyse data for sentiment analysis.

I created a Twitter developer account, configured my Twitter API connection with the API keys etc., and execution worked well.

Now I want to mine some Twitter data with the "Twitter Search" node. After configuring this node, execution fails with this message:

"Execute failed: 400:The request was invalid. An accompanying error message will explain why. This is the status code will be returned during version 1.0 rate limiting(https://dev.twitter.com/pages/rate-limiting). In API v1.1, a request without authentication is considered invalid and you will get this response.
message - Bad Authentication data."

Could anyone give me a tip for getting this node to work properly?

https://dev.twitter.com/rest/public/search, https://www.knime.org/blog/knime-twitter-nodes and https://www.knime.org/blog/using-knime-to-find-out-what-your-users-are-t... couldn't help me with this problem.

Thanks!

Automation Nodes + Server


Hello,

I had a look at the https://www.knime.org/knime-personal-productivity features, and the KNIME automation nodes (e.g. "Call Local Workflow" with reports) are exactly what I need to replace the old-fashioned KNIME batch calls I'm currently using as a workaround.

Are these nodes included in the Server licence, though (so that the server can run them as well)? And if yes, can they also be used by the people working with the server (up to the license limit) to try things locally before uploading them to the server?

Chemical Sketcher


To whom it may concern,

I have the problem that the Chemical Sketcher does not allow editing a structure. If you press "edit", the structure is shown in the editor, but changes to the structure do not affect the structure on the left (see picture). Instead (if you ticked the option "add current structure on exit"), it is simply appended at the end.

To my mind, you are not able to edit a drawn structure at all using this node. Allowing the structure to be edited would make this node a most powerful tool.

Thank you very much for your help

Best

DeepLearning4J Data Set


Hey,

Can somebody please tell me the origin of the data set from the DeepLearning4J example workflow(s)? I am not really interested in the source, more in the meaning of the data.

Best,
Marc


JPMML Classifier node error when appending probability columns


Hello everyone,

I am getting an error when using the JPMML Classifier node. I am modeling binary classes (0/1) using a PMML export from IBM SPSS Modeler. The data have 82 fields. I read the PMML with the "PMML Reader" node and score my data with the "JPMML Classifier" node. Using the "JPMML Classifier" node as-is works as expected: I get a prediction field with binary labels appended to my data.

But when I check "Append columns with normalized class distribution" under the "Options" tab in the node dialog, the node fails on execution with the following error:

ERROR JPMML Classifier     0:12       Execute failed: Cell count in row "Row0" is not equal to length of column names array: 84 vs. 82

It looks like the node cannot write the probability columns to the data or change the data model.

Any idea how to fix this issue?

Thanks in advance!

Restore my Local Workspace


I tried importing a workflow into my Local Workspace using 'Import Workflow'. After that, the Local Workspace in the KNIME Explorer is no longer active. I've tried 'Reset Perspective', but it is still not working. Browsing outside of KNIME to the directory where the KNIME workspace is located (following the local path), I am able to see all my workflows there, but now I can't load them in KNIME. The attachment shows my Explorer as it is now. Help!

SPSS PMML executed on KNIME


Hi Guys, good evening.

I'm leading an Advanced Analytics program in a global company, and we are currently evaluating several technologies to build our environment.

Right now we are developing models in SPSS Modeler and exporting them to PMML, which leads us to the question: how do we execute these models?

I proposed using KNIME to execute them, although I'm facing some problems.
Here is the thing: I created a General Regression model in SPSS and exported it to PMML. Then I created a simple flow in KNIME that ran successfully.

However, now that I'm trying to do the same for a neural network (which is what 90% of our existing models are), I'm facing problems...

1. The first thing is that KNIME is requesting each row as a column. For example, if I want to predict column A, then the PMML creates several fields based on this field and the number of rows: if there are 3 rows, we get A1, A2, A3. But if I have a hundred rows, should I really be required to create all these columns? (I hope, and imagine, that I'm missing a step.)

2. The second point is that even if I create all these columns, I keep getting these errors:

ERROR PMML Predictor       0:3        Only logistic activation function is supported in KNIME MLP.
ERROR PMML Predictor       0:3        Execute failed: org.knime.core.data.def.StringCell cannot be cast to org.knime.core.data.DoubleValue

I might be running into very basic beginner problems, but it would be nice to receive some direction in order to put KNIME "in production" here.

Thanks in advance!

No idea for a title


Hey KNIME team,

I found a massive bug today:

When opening multiple workflows consecutively, each workflow gets a unique number, but none will ever get a one. I hope you can fix this ASAP.

Best,
Marc

java snippet seems to execute twice (on reset, and on execution)


I want to start up a Tableau session as part of a KNIME workflow, while allowing the workflow to continue.

I am using a Java Edit Variable node to do this, via a Runtime call. I have attached a screenshot of the snippet code I'm using. The code works, but I find that each time the node is reset, the content executes again, so I get two Tableau sessions initiated each time.

Is there a way to prevent this duplication? 

David

 

 
