Show Posts



Messages - mytfg

Post 1
Hi,

thanks for your reply  :)

So the part
Code: [Select]
"30","4.933379975","SMB","Negotiate Protocol Request"
"36","5.166984619","SMB","Negotiate Protocol Response"
takes quite a "long" time as well.

I also captured the traffic with Wireshark and again got a similar delay between the Client Hello and the Server Hello.
In your capture, too, the actual file / directory access is faster than the initial connection.

Code: [Select]
"37";"1.085010466";"TLSv1.2";"Client Hello"  (Client to Zentyal)
"38";"1.085036481";"TCP";"ACK"  (Zentyal to Client)
...
"69";"1.512500216";"TLSv1.2";"Server Hello"  (Zentyal to Client)

Between packet number 38 and 69 there are several unrelated packets (broadcasts, VM control traffic, etc.).
The same happens when I disable SSL, in which case I see a bindRequest and bindResponse instead of the Client Hello and Server Hello. The time difference, however, is the same.

Post 2
Hello everyone,

Using PHP's LDAP functions to create and query users in Zentyal, I noticed that the `ldap_bind` call is quite slow compared to identical calls against a Windows AD.

To migrate from the Windows AD to a Zentyal-based one, I set up a Zentyal 5 server with a completely new AD, which we have now been running in parallel for a year, giving us plenty of time to reconfigure dependent services.
My (self-implemented) user management platform therefore creates and edits users in both directories simultaneously.

A few months ago I updated to Zentyal 6, and I think the performance drop started there, although I'm not entirely sure the problem did not already exist with 5.1.

The LDAP bind takes around 300 ms when binding to Zentyal. The time for arbitrary queries afterwards is negligible.

When binding to the Windows AD, it usually takes less than 10 ms.

While 300 ms is still acceptable, I would really like to know why there is such a difference between the systems, and whether it is possible to speed up the bind process.

Using Wireshark (or tshark), I captured the traffic from the frontend server to the Zentyal server: the LDAP bind request takes 300 ms to get a reply, although the TCP ACK arrives almost immediately.
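For anyone who wants to reproduce this capture: tshark can print the same kind of listing directly. The interface name below is a placeholder, and the filter assumes plain LDAP on port 389 (use 636 for LDAPS):

```shell
# Capture LDAP traffic and print frame number, relative time,
# protocol and info, similar to the listings in this thread.
# "eth0" is a placeholder interface name.
tshark -i eth0 -f "tcp port 389" \
  -T fields -E separator=';' -E quote=d \
  -e frame.number -e frame.time_relative \
  -e _ws.col.Protocol -e _ws.col.Info
```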

I tried binding both to the hostname and to the IP address, but there is no difference: in both cases the bind request is issued without delay, and the bind response is slow.

This is the PHP time measurement:
Code: [Select]
"New LDAP Connection for Domain 1 (Windows)",
"Connection Time for Domain: 0.0077838897705078s",
"New LDAP Connection for Domain 2 (Zentyal)",
"Connection Time for Domain: 0.2680070400238s"
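The timings above come from a small wrapper around `ldap_bind`. A minimal sketch of such a measurement, with placeholder URI and credentials, could look like this:

```php
<?php
// Measure how long a single LDAP bind takes, in seconds.
// The URI, DN and password passed in are placeholders.
function timed_bind(string $uri, string $dn, string $password): float
{
    $conn = ldap_connect($uri); // does not contact the server yet
    ldap_set_option($conn, LDAP_OPT_PROTOCOL_VERSION, 3);
    ldap_set_option($conn, LDAP_OPT_REFERRALS, 0);

    $start = microtime(true);
    ldap_bind($conn, $dn, $password); // the slow call being measured
    $elapsed = microtime(true) - $start;

    ldap_unbind($conn);
    return $elapsed;
}

// Usage (placeholder values):
// echo timed_bind('ldaps://zentyal.example.com', 'user@example.com', 'secret');
```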

Edit:
One more piece of information:
When opening and closing many connections (at most 20 simultaneous, but ~3,000 consecutive in total), bind times increase on both servers. The Windows AD then takes about 17 ms (roughly twice its normal bind time), while Zentyal takes about 2.7 s = 2700 ms (roughly ten times slower than its normal 300 ms).

Edit 2:
With this high number of queries, the delay increases to 10 seconds or even more. CPU load on the server is at 100%. The top 10 processes by CPU utilization are as follows:
Code: [Select]
79.5 29707 MYTFG\c+ /usr/sbin/smbd -D --option=server role check:inhibit=yes --foreground
35.5 29703 root     /usr/sbin/samba --foreground --no-process-group
24.3 29679 MYTFG\i+ /usr/sbin/smbd -D --option=server role check:inhibit=yes --foreground
24.3 29664 MYTFG\i+ /usr/sbin/smbd -D --option=server role check:inhibit=yes --foreground
18.7 29626 MYTFG\m+ /usr/sbin/smbd -D --option=server role check:inhibit=yes --foreground
14.4 29662 MYTFG\b+ /usr/sbin/smbd -D --option=server role check:inhibit=yes --foreground
12.8 29650 MYTFG\e+ /usr/sbin/smbd -D --option=server role check:inhibit=yes --foreground
10.2 29631 MYTFG\n+ /usr/sbin/smbd -D --option=server role check:inhibit=yes --foreground
 9.7 29532 root     /usr/sbin/smbd -D --option=server role check:inhibit=yes --foreground
 8.6 29596 MYTFG\s+ /usr/sbin/smbd -D --option=server role check:inhibit=yes --foreground
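For reference, a process listing like the one above can be produced on the Zentyal host with `ps` (assuming procps is installed):

```shell
# Top 10 processes by CPU usage: %CPU, PID, user, command line
# (head -n 11 keeps the header line plus 10 processes)
ps -eo pcpu,pid,user,args --sort=-pcpu | head -n 11
```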

So what internal process can cause this high CPU load?

Edit 3:
Some of the CPU load was caused by the high logging level I had set; with logging disabled I get lower bind times, but they are still much slower than when binding to the Windows DC.
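For anyone hitting the same issue: the Samba log level is controlled in smb.conf. Note that Zentyal regenerates this file from its own templates, so the change should be made through Zentyal's configuration stubs rather than by editing the file directly; the snippet below is only illustrative:

```
[global]
    # Lower log levels reduce CPU and disk overhead;
    # 0-1 is typical for production, 3+ only for debugging.
    log level = 1
```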

I would appreciate any ideas and suggestions.

Best regards  :)
