
Need urgent help on large dataset extraction



  • ebs Posts: 137 ✭✭
    What is your setting for maxParameterCount in server.xml for the connector you are using? Make sure you set it on the correct one based on the port/method you are connecting with.

    The bug listed above mentions setting it to 100000, but the following bug describes that fix not working because of the size of the number.

    I would try setting it to just over the number of variables OC says you have, e.g. 11100.

    Also look for any log that might give you an indication of memory issues, and try to monitor memory usage while you are actually performing the extract.
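
    For reference, a minimal sketch of where the attribute goes; the surrounding attribute values here are assumptions based on a default Tomcat server.xml, not your actual file:

    ```xml
    <!-- Hypothetical example: plain-HTTP connector in conf/server.xml.        -->
    <!-- maxParameterCount caps how many request parameters (GET + POST)       -->
    <!-- Tomcat will parse per request; the default is 10000, the same figure  -->
    <!-- the extract appears to be hitting.                                    -->
    <Connector port="8080" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="8443"
               maxParameterCount="11100" />
    ```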

  • RCHENU Posts: 207 ✭✭
    edited July 2017
    This is what I have for the moment in server.xml:

    <Connector port="8080" protocol="HTTP/1.1" />

    Should I set maxParameterCount to 11100 like you said ?

    I didn't change that section (still between comments):

    <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
               maxThreads="150" scheme="https" secure="true"
               clientAuth="false" sslProtocol="TLS" />

    I'm using an SSL connection and everything is redirected in my nginx.conf file:

    # HTTP server
    # define server on port 80 (http)
    server {
        listen 80;
        server_name [...];

        location /OpenClinica {
            rewrite ^/(.*)$ https://[...]/$1 redirect;
        }
    }

    # HTTPS server
    # define server on port 443 (https)
    server {
        listen 443;
        server_name [...];
        ssl on;
        ssl_certificate /usr/local/ssl/xxx.fr.crt;
        ssl_certificate_key /usr/local/ssl/xxx.fr.key;

        location ~ ^/OpenClinica/includes/(.*)$ {
            expires max;
            add_header Cache-Control "public";
            alias /usr/local/tomcat/webapps/OpenClinica/includes/$1;
        }
    }

    I didn't find anything in the Tomcat logs for the moment, and when I monitor RAM usage (with "top" or "watch -d free -m") while creating a dataset, memory and even CPU usage barely change at all...

    What do you think ?
    Thanks for your help and time.


  • ebs Posts: 137 ✭✭
    If you are only seeing minimal impact on the machine then I would put that at the bottom of the list of possible causes.

    If you are using SSL on 8443 in server.xml, I would try adding maxParameterCount to that section of the file.
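
    For instance, a sketch only (the attributes besides maxParameterCount are copied from the snippet you posted, and 11100 is the figure suggested earlier):

    ```xml
    <!-- Hypothetical: the 8443 SSL connector with the parameter cap raised -->
    <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
               maxThreads="150" scheme="https" secure="true"
               clientAuth="false" sslProtocol="TLS"
               maxParameterCount="11100" />
    ```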

    You could also look at the PostgreSQL log files for anything odd.

  • RCHENU Posts: 207 ✭✭
    edited July 2017
    I checked the PostgreSQL log and there is nothing in it when I perform the extraction.

    From what I read, I shouldn't uncomment the section in the server.xml file since everything is handled by nginx.

    Since there is nothing in the logs, I have no clue what to do anymore.

    Our IT guy will try this weekend to increase the RAM and cores on the VM. Maybe this will help...

    But maybe the problem is just the 10000 limit from OpenClinica and there is nothing we can do.

    Have you ever tried to create a dataset using "select all" with more than 10000 items?

    What bothers me is that the extraction already takes 7 minutes and the study has just started.
  • ebs Posts: 137 ✭✭
    Stumbled on your issue from 2016 - https://forums.openclinica.com/discussion/15980/large-dataset-extraction

    It seems that parameter change worked for you before. I guess this was before you implemented nginx?

    I have no experience with nginx, but it's looking more like that may be the issue. Can you pass maxParameterCount through nginx?
  • RCHENU Posts: 207 ✭✭
    I increased "keepalive_requests" and "client_max_body_size" but it didn't do anything.
    I don't know if those are the right parameters to change...

    If someone has an idea and knows nginx ?
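
    For context, a sketch of where those directives live (the values are illustrative assumptions, not known-good settings). Note that neither directive caps the number of form parameters; that limit is Tomcat's maxParameterCount, which nginx cannot override:

    ```nginx
    # Hypothetical nginx.conf fragment (http context)
    http {
        keepalive_requests   1000;  # requests allowed over one keep-alive connection
        client_max_body_size 64m;   # max size of a request body, e.g. the POSTed extract form
        # ... rest of the http configuration ...
    }
    ```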

