Advertising packets before payload sending

Hello,

I am working on a project where I have to send 5 packets of 244 bytes every 15 minutes.

In the post https://www.onsemi.com/forum/t/not-achieving-maximmum-bitrate-2mbps/788 I was given some advice on how to do it.

Now I am able to send those packets queued, with DLE and the MD bit forced to 0, over a 7.5 ms connection interval. But I am not getting the behaviour it is supposed to have (the graph can be seen in the post I mentioned).

These are the measurements and behaviours I am getting:
Image 1: [screenshot]
Image 2: [screenshot]
Image 3: [screenshot]

As you can see in the graphs above, I have large advertising intervals (Image 1) which I am not able to reduce, neither in duration nor in the number of intervals.
In Image 2, you can see that after the connection occurs, the link successfully reconfigures to increase the bitrate, but the device keeps advertising before sending the payload (the payload transmission can be seen in Image 3).

NOTE: For this project I am modifying the peripheral_server_sleep_ext sample from the CMSIS pack, running on a GVB1 RSL10 SIP development board.
As the central client I am using the nRF Connect BLE debugger on my Android smartphone.

Then I would like to ask you:

  • How can I reduce/eliminate the advertising packets that are unnecessary for connecting successfully?
  • Why are there some advertising packets after the connection begins? How could I remove them?
  • Why is it taking so long to send the payload?

If you can answer my questions I would be very grateful.

Thank you very much in advance,

Enric Puigvert


Hi @epuigvert,

As a first step, it might be best to align on the DLE and MD Bit configurations to ensure they are happening as expected.

I have attached a sample project, based on our 'peripheral_server_sleep' sample code, that implements the 5-packet DLE and MD bit functionality that I shared with you in the previous thread. I did not experience the extra advertising packets (advertising is automatically cancelled after a Connection Request and has to be explicitly restarted), and I also did not see the long payload time.

Can you please run this code and verify that you are seeing the expected connection behavior? You should expect to see 5 packets sent on every 10th Connection Interval (similar to the screenshot in the other thread), with all of the other Connection Intervals containing only the small Connection Supervision packet.

If you see the proper behavior when using this provided code, the next step is to implement the same features in the 'peripheral_server_sleep_ext' sample code.

peripheral_server_sleep_5packet.zip (129.5 KB)
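For reference, the core of the pattern looks roughly like the sketch below. This is not the attached project itself: the CustomService_SendNotification() helper follows the sample-code convention, and the handler name, counter, and data buffer are illustrative.

    /* Illustrative sketch: hand 5 queued notifications to the BLE stack on
     * every 10th Connection Event so they are transmitted in one burst. */
    #define PACKET_LEN     244
    #define PACKET_COUNT   5
    #define TX_PERIOD      10    /* Connection Events between bursts */

    static uint8_t  tx_data[PACKET_COUNT][PACKET_LEN];
    static uint32_t conn_event_count;

    /* Called once per Connection Event (hypothetical hook name). */
    void App_ConnectionEventHandler(void)
    {
        if ((++conn_event_count % TX_PERIOD) == 0)
        {
            /* With DLE negotiated and the MD bit handling described in the
             * previous thread, all 5 packets can fit in one Connection
             * Event. */
            for (uint8_t i = 0; i < PACKET_COUNT; i++)
            {
                CustomService_SendNotification(ble_env.conidx,
                                               CS_IDX_TX_VALUE_VAL,
                                               tx_data[i], PACKET_LEN);
            }
        }
    }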

Hi @brandon.shannon ,

Thank you very much for your fast answer.

I've run your project and this is what I can measure:
[screenshot]

I can see the 5 packets being sent every 10th Connection Interval as you said, but as you can see in the graph above, at 2.2 s I get a burst of the 5 packets at a higher bitrate. I don't know if that matches the behaviour you were describing. Here is a zoom of it:

[screenshot]

Also, I can see some negative spikes which I'm not sure are expected.

Thank you very much

Hi @epuigvert,

I think I see what you mean that the 5 TX packets look to take ~500 ms to send, though it is hard to tell at the resolution of the screenshot. Is it possible to zoom in further on the 5-packet area to take a closer look? It seems very different from the ~12 ms Connection Event that I shared in the previous thread.

Also, do you happen to know what default Connection Interval your Central device negotiates at connection time? During my testing this was configurable, and I set it to ~100 ms to allow sufficient time for all 5 packets to be transmitted in a single Connection Interval. If your Central device is negotiating a Connection Interval close to, or less than, the minimum of ~12 ms, the RSL10's BLE stack will have to send the 5 packets in separate Connection Events, which could lead to a slower effective bitrate.
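As a rough back-of-envelope: with DLE, a 244-byte notification plus its ATT and L2CAP headers fills a ~251-byte Link Layer payload, which is about 1 ms of air time at 2 Mbps; adding the two 150 µs inter-frame spaces and the Central's empty acknowledgement brings each packet to roughly 1.4 ms, so the 5 packets need on the order of 7 ms of Connection Event time, plus scheduling overhead. These numbers are approximate, but they are consistent with the ~12 ms figure above.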

Hi @brandon.shannon!

I am sorry, I made a mistake when sending the zoom of the 5-packet burst. I am resending it now:


The graph above corresponds to those faster packets I mentioned.

The next one is from the packets that seem to be sent correctly:

The Connection Interval on my central device is not configurable, but it does show me the configured interval. In my case it is set to 7.5 ms, as you can see in the image below:

But although this is much lower than the ~12 ms you mentioned, I can see that the 5 packets are sent in a single burst. Am I correct?

Thank you

Hi @epuigvert,

It is not entirely clear what is occurring in the first screenshot that you shared in the previous reply. As an initial guess, this could possibly be one of two things:

  1. This could be the Service Discovery or the MTU / PHY / DLE exchange that is triggered immediately following establishment of a Connection. These negotiations will usually only occur a single time at the start of the Connection. Do these irregularities show up after the Connection has been maintained for a longer time (>30s)?

  2. This could also be the RSL10 attempting to send the Notifications before DLE has been fully negotiated. When a Notification of more than 27 bytes is sent without DLE, it is broken up into several consecutive 27-byte notifications and transmitted. Again, do these irregularities continue after >30s of Connection time, when we are confident the DLE has been negotiated?


As for the Connection Interval used, the value of 7.5 ms does not seem possible, as the graph you shared above shows an interval of >20 ms between Connection Events.

Do you have access to a more robust Central device that will let you control the Connection Interval that is set by default? I recommend our RSL10 BLE Dongle and the associated Bluetooth Low Energy Explorer if you have access.


Hi @brandon.shannon,

I tried what you asked and this is what I get:

After the connection has been maintained for 2-3 minutes, the irregularities that we saw in the previous posts are gone, but now there are some packets being sent before the 10th Connection Interval (at 1.6 s, for example). Here is a zoom in on this:

As for another central device, I am currently waiting for a delivery with another EVB1 development board and the RSL10 BLE Dongle. There are some problems with the delivery that we are managing. I will notify you when I receive these devices.

Thank you.

Hi @epuigvert,

It's good to hear the irregular packets are gone; I therefore think it is safe to assume that they were one of the exchanges I suggested above.

As for the 'extra' packets being sent, I do not think that is what they are; they are instead a result of the event-based operations not being entirely uniform in processing time.

As you can see, the 5-packet bursts are not always exactly 10 Connection Events apart. This is because the packet messages are simply handed to the BLE stack on every 10th event count (this is visible as the slightly longer 'normal' Connection Events, e.g. at ~1.5 s, which is the time required to communicate the messages to the stack).

The time it takes for these packet messages to be processed and actually transmitted can vary. That is why you always see the longer Connection Events (like the one at ~1.5 s) on the exact 10th interval, but the actual TX events might occur 1-3 Connection Intervals later.

I believe this might be made more consistent by increasing the Connection Interval to allow more time for packet processing.


Hi @brandon.shannon,

Thank you very much for your answers. As the problem seems to be solved, I will try to adapt my code to match yours.
I hope that I will now be able to send my packets successfully.

Again, thank you very much

Enric Puigvert


Hi @brandon.shannon,

I corrected my code with the modifications from the code you shared in the first post of this discussion.

I have now received the RSL10 BLE Dongle and a new EVB1 development board. I tried to connect my peripheral device (development board 1) to the Dongle and didn't have any problem.
Then I flashed the other development board (development board 2) with the central_client_uart example and tried to print over UART the packets that the peripheral device sends. To my surprise, nothing was received. The connection works (the central device has its LED solid on) but the packets are never printed.

My hypothesis is that the central device never discovers the custom service and never requests data. Am I wrong?

Thank you in advance,

Enric Puigvert

Hi @epuigvert,

As a first step, it might make sense to set up the 'peripheral_server_uart' & 'central_client_uart' sample firmware to test and ensure that your UART terminal setup is ready to operate on both ends of the link.

After you have verified that these two sample apps work together and you can see the UART data being transmitted, you can move on to debugging the 'peripheral_server_sleep' implementation.

To ensure that the service discovery is happening as expected, you can monitor the 'CustomService_ServiceEnable()' function within 'ble_custom.c' of the 'central_client_uart' firmware to see when the Custom Service discovery is executed. You can also watch the 'GATTC_CmpEvt()' function within the same file to see if any handling involving the 'GATTC_DISC_BY_UUID_SVC' operation has been triggered, signaling that the Custom Service has been discovered successfully.

If you can verify these two points as a start, we can rule them out and start looking further into the packet exchange that occurs after the Custom Service is enabled.
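As a sketch, the instrumentation could look like the following (the PRINTF macro and struct field names follow the RSL10 sample conventions and should be checked against your SDK version):

    /* In GATTC_CmpEvt() of 'central_client_uart/ble_custom.c': log every
     * completion event and flag the Custom Service discovery. */
    PRINTF("GATTC_CmpEvt: operation=0x%02x status=0x%02x\n",
           param->operation, param->status);
    if (param->operation == GATTC_DISC_BY_UUID_SVC)
    {
        PRINTF("Custom Service discovery completed\n");
    }

    /* In CustomService_ServiceEnable(): confirm the enable path runs. */
    PRINTF("CustomService_ServiceEnable executed\n");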


Hi @brandon.shannon,

Thank you for your answer.

I set up peripheral_server_uart and central_client_uart and was able to send and receive data on both terminals. It seems to work correctly.

Then, on central_client_uart, I configured the printf utility with the RTT backend to see what is going on in CustomService_ServiceEnable() and GATTC_CmpEvt(). In these functions I print a message indicating that the function has executed (PRINTF("GATTC_CmpEvt Executed"); for example), and I also print param->operation to see which operation the GATTC is performing.

To make sure the PRINTF function is working, I print an initial message at program start.

With this configuration, I tried to connect my modified peripheral_server_sleep_ext sample to the central device.

After the connection, I can see this on my J-Link RTT Viewer:
[screenshot]

As you can see in the image above, CustomService_ServiceEnable and GATTC_CmpEvt are triggered successfully.
We can also see that param->operation from the gattc_disc_cmd struct has the expected value, corresponding to the GATTC_DISC_BY_UUID_SVC operation. This print is triggered inside GATTC_CmpEvt.

With this data I think I can affirm that the service discovery is happening as we expect (please correct me if I am missing something).

Thank you very much,

Enric Puigvert

Hi @epuigvert,

I agree with your conclusion above that these PRINTFs seem to indicate that Service Discovery has been executed successfully.

The next step that I recommend is to do a similar validation on two other points:

  1. On the Peripheral device, use a PRINTF to check the Status of the 'GATTC_CMP_EVT' that corresponds to the 'GATTC_NOTIFY' operation. By checking the 'status' & 'operation' variables of the Complete Event data structure, you should be able to determine if the Notification was handed to the BLE stack properly (see the sketch after this list).
  2. Use PRINTF to determine if the 'GATTC_EVENT_IND' handler on the Central device's Custom Service is being triggered by the Notification as expected. For every Notification received, this handler should execute to determine which Characteristic handle the Notification is directed at.
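A minimal sketch of both checks (field names taken from the standard gattc_cmp_evt and gattc_event_ind message structures; verify them against your SDK):

    /* 1. Peripheral: inside the GATTC_CMP_EVT handler */
    if (param->operation == GATTC_NOTIFY)
    {
        PRINTF("Notify complete: status=0x%02x\n", param->status);
    }

    /* 2. Central: inside the GATTC_EVENT_IND handler (GATTC_EvtInd) */
    PRINTF("Notification received: handle=0x%04x length=%u\n",
           param->handle, param->length);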

Hi @brandon.shannon,

Thank you for your recommendations.

I tried to add a PRINTF at GATTC_CMP_EVT but I can't, because of the deep sleep mode. As I read in this thread on onsemi's forums (KB: How to activate printf at see the output at terminal serial console - #6 by martin.bela), PRINTF is disabled. I also tried to call printf_init() on device wake-up, but it is not working at all. How do I have to configure it? If it is impossible, is there an alternative way to see those events? (As you surely know, I can't use the debugger either.)

The second point I was able to print. I also added prints to some other functions and events to try to find where the problem is. With them I discovered that when the central device connects to the peripheral device, GATTC_EVENT_IND is only triggered while the connection is being established. After that, it never happens again (as you can see in the screenshot below).

[screenshot]

So I think we found the problem… What are the next steps?

Thank you very much,

Enric Puigvert

Hi again @brandon.shannon.

I was investigating my peripheral device and found that when I change the condition for sending packets (after the connection begins) from if ((cs_env.tx_value_changed) && (cs_env.tx_cccd_value & 1)) to if (cs_env.tx_value_changed) //&& (cs_env.tx_cccd_value & 1)), the GATTC_EVT_IND begins to appear. Then, on my central device, in the GATTC_EvtInd() function in ble_custom.c, I did a PRINTF of param->value and, surprisingly, I saw the data I was sending from the peripheral server.

So now I can say that I am receiving data from the peripheral on the central device, but I can't print it on the UART terminal.

What do you think about it?

Hi @epuigvert,

By comparing the two peripheral sample codes, I can see what you are saying.

This is actually an implementation error in our 'peripheral_server_uart' sample code. Normally, a peripheral device should always check the CCCD of a Characteristic before sending a Notification or Indication, to confirm they have been enabled. Our 'peripheral_server_uart' does not perform this check like it should.

As you can see in the 'CustomService_Env_Initialize()' function, both samples set the CCCD value to 0x1 by default (Notifications enabled), but following a connection, the 'central_client_uart' sample code will see that the Central does not want them enabled by default and return them to 0x0 (disabled). It is then up to the Central device to write an updated CCCD value to enable Notifications again.

The proper way to achieve the Notification functionality you are looking for is to:

  1. Ensure the peripheral firmware always checks the CCCD before sending Notifications or Indications. Bit 0 enables Notifications, Bit 1 enables Indications (i.e. 0x1 - Notifications only, 0x2 - Indications only, 0x3 - both); see the sketch after this list.
  2. The Central will usually set the CCCD value to 0x0 after a Connection is formed. Fix this by having the Central device write to the CCCD of the Custom Service Characteristic after Service Discovery has completed. This is safer than removing the CCCD check, which you tested in your reply.
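On the peripheral side, the check could look roughly like this (a sketch reusing the cs_env names from the sample; CustomService_SendNotification(), CS_IDX_TX_VALUE_VAL, and tx_packet_len are assumptions):

    /* Bit 0 of the CCCD = Notifications, bit 1 = Indications; only notify
     * once the Central has actually enabled Notifications. */
    if (cs_env.tx_value_changed && (cs_env.tx_cccd_value & 0x1))
    {
        CustomService_SendNotification(ble_env.conidx, CS_IDX_TX_VALUE_VAL,
                                       cs_env.tx_value, tx_packet_len);
        cs_env.tx_value_changed = false;
    }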

As for ensuring that your data is being passed to the Central device's UART interface, the next step is to check whether the 'UART_FillTXBuffer()' function is being reached as expected, and also what code is returned from the function once execution completes. It is possible that some of the logic in 'GATTC_EvtInd()' is preventing 'UART_FillTXBuffer()' from receiving your values, as you have already confirmed that the values are received over BLE, but not printed over UART.

Hi @brandon.shannon,

I see what you are saying, but I don't know how to configure the central device to set the CCCD to 0x1. Could you explain how I can do it?

Also, I can't find where in the code the Central sets the CCCD value to 0x0 after the connection. I think I am missing something…

About the UART_FillTXBuffer() function, I can tell you that it never triggers, because unhandled_packets is always NULL. This doesn't make any sense to me. If I am receiving data (the system receives it and I can print it over RTT PRINTFs), why is it never detected by the UART logic?

Thank you very much,

Enric Puigvert

Hi @epuigvert,

You can write to the Custom Service CCCD value for a Characteristic the same way that you would write to the Characteristic value itself. If you look in the 'GATTC_WriteReqInd()' function for the 'CS_IDX_TX_VALUE_CCC' handle in the Peripheral firmware, you will see the logic that applies the new CCCD to the Custom Service TX Characteristic.

This can be written from the Central using the 'CustomSrvice_SendWrite()' function while targeting the handle that is used for the Custom Service TX CCCD.

For the UART interaction, the following logic checks occur when the Central receives a Notification:

    if (param->length > 0 || unhandled_packets != NULL)
    {
        if (cs_env.disc_att[CS_IDX_TX_CHAR].pointer_hdl == param->handle)
        {
            memcpy(cs_env.tx_value, param->value, param->length);
            flag = 0;

            /* Start by trying to queue up any previously unhandled packets.
             * If we can't queue them all, set a flag to indicate that we need
             * to queue the new packet too. */
            while ((unhandled_packets != NULL) && (flag == 0))
            {
                if (UART_FillTXBuffer(unhandled_packets->length,
                                      unhandled_packets->data) !=
                    UART_ERRNO_OVERFLOW)
                {
                    /* Remove a successfully queued packet from the list of
                     * unqueued packets. */
                    unhandled_packets = removeNode(unhandled_packets);
                }
                else
                {
                    flag = 1;
                }
            }

            /* If we don't have any (more) outstanding packets, attempt to
             * queue the current packet. If this packet is successfully
             * queued, exit. */
            if (flag == 0)
            {
                if (UART_FillTXBuffer(param->length, cs_env.tx_value) !=
                    UART_ERRNO_OVERFLOW)
                {
                    return (KE_MSG_CONSUMED);
                }
            }
        }
    }
  • The first 'if (param->length > 0 || unhandled_packets != NULL)' should pass, given that 'param->length' will likely be >0.
  • The second 'if (cs_env.disc_att[CS_IDX_TX_CHAR].pointer_hdl == param->handle)' should pass if the handle being written to matches what is expected.
  • The 'while ((unhandled_packets != NULL) && (flag == 0))' loop will likely be skipped, as it only runs if there are UART packets from a previous Notification that have not been processed yet.
  • The final 'if (UART_FillTXBuffer(param->length, cs_env.tx_value) != UART_ERRNO_OVERFLOW)' statement is where the notification data should be sent to the UART to be printed.

I recommend that you insert debugging PRINTF statements to determine which of the checks above is failing while routing the Notification data to the UART interface.
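For example (a sketch of GATTC_EvtInd() instrumentation; the existing queueing logic goes where the comment indicates):

    if (param->length > 0 || unhandled_packets != NULL)
    {
        PRINTF("EvtInd: length check passed (len=%u)\n", param->length);

        if (cs_env.disc_att[CS_IDX_TX_CHAR].pointer_hdl == param->handle)
        {
            PRINTF("EvtInd: handle check passed\n");
            /* ... existing UART queueing logic from above ... */
        }
        else
        {
            PRINTF("EvtInd: handle mismatch (got 0x%04x, expected 0x%04x)\n",
                   param->handle,
                   cs_env.disc_att[CS_IDX_TX_CHAR].pointer_hdl);
        }
    }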

Hi @brandon.shannon
I tried what you suggested with the CustomSrvice_SendWrite() function to write the CCCD value to 1. It didn't work; I think I am doing it wrong.
I defined CustomSrvice_SendWrite() in ble_custom.c (and also added the prototype to the .h file). Then I call the function inside GATTC_DiscSvcInd(), after the GATTC_DISC_CMD is sent (inside the same if). You can see what I wrote below:
    int GATTC_DiscSvcInd(ke_msg_id_t const msg_id,
                         struct gattc_disc_svc_ind const *param,
                         ke_task_id_t const dest_id,
                         ke_task_id_t const src_id)
    {
        struct gattc_disc_cmd *cmd;

        PRINTF("GATTC_DiscSvcInd executed\n");

        /* We accept only discovered attributes with a 128-bit UUID, according
         * to the characteristics defined in this custom service */
        if (param->uuid_len == ATT_UUID_128_LEN)
        {
            PRINTF("Discovered services\n");
            cs_env.state       = CS_SERVICE_DISCOVERD;

            cs_env.start_hdl   = param->start_hdl;
            cs_env.end_hdl     = param->end_hdl;

            cs_env.disc_attnum = 0;

            /* Allocate and send GATTC discovery command to discover
             * characteristic declarations */
            cmd = KE_MSG_ALLOC_DYN(GATTC_DISC_CMD,
                                   KE_BUILD_ID(TASK_GATTC, ble_env.conidx),
                                   TASK_APP, gattc_disc_cmd,
                                   2 * sizeof(uint8_t));

            cmd->operation = GATTC_DISC_ALL_CHAR;
            cmd->uuid_len  = 2;
            cmd->seq_num   = 0x00;
            cmd->start_hdl = cs_env.start_hdl;
            cmd->end_hdl   = cs_env.end_hdl;
            cmd->uuid[0]   = 0;
            cmd->uuid[1]   = 0;

            /* Send the message */
            ke_msg_send(cmd);
            CustomSrvice_SendWrite(ble_env.conidx,
                                   (uint8_t *)cs_env.tx_cccd_value,
                                   CS_IDX_TX_VALUE_CCC, 0x00, 1, GATTC_WRITE);
        }

        return (KE_MSG_CONSUMED);
    }

With this code, this is what I see in my RTT Viewer:

Where it says 'BLE env Service Enabled (2)', it means that it has entered the else if (param->operation == GATTC_DISC_ALL_CHAR && param->status == ATT_ERR_ATTRIBUTE_NOT_FOUND && cs_env.state == CS_SERVICE_DISCOVERD) branch in the GATTC_CmpEvt() function.

I can’t understand what is going on. Could you give me some advice?


About the UART interaction, I can see that:

  • It goes inside if (param->length > 0 || unhandled_packets != NULL).
  • It doesn't pass any other condition in the GATTC_EvtInd() function.

What do you think about it?

Thank you very much for your advice,

Enric Puigvert

Hi @epuigvert,

For the CCCD, I think I see what is going wrong. By default, our sample code will perform the following steps during Service Discovery:

  1. Discover all of the Services by using the 'GATTC_DISC_CMD' command with the 'GATTC_DISC_ALL_SVC' operation.
  2. Each Service found will trigger a 'GATTC_DISC_SVC_IND' message that contains the Service UUID and the Start and End Handle values.
  3. If the UUID matches the Custom Service, a 'GATTC_DISC_CMD' command with the 'GATTC_DISC_ALL_CHAR' operation is dispatched using the Custom Service's Start and End Handles to find all of the Characteristics within the Custom Service.
  4. Each Characteristic found will trigger a 'GATTC_DISC_CHAR_IND' message that contains the Start Handle value.

In order to find the CCCD handle that must be passed to 'CustomSrvice_SendWrite()' so that the CCCD descriptor can be written, you have to add the following functionality (see the sketch after this list):

  1. Using the Custom Service Start and End Handles, along with the Start Handle of each Characteristic, use the 'GATTC_DISC_CMD' command with the 'GATTC_DISC_DESC_CHAR' operation to find all of the Descriptors within a given handle range (i.e. if CHAR1 starts at handle 0x5 and CHAR2 starts at handle 0x9, you perform the Descriptor Discovery over the range 0x5-0x8 to find the CHAR1 Descriptors).
  2. Each Descriptor found within the handle range will trigger a 'GATTC_DISC_CHAR_DESC_IND' message that contains the UUID and Handle of the Descriptor.
  3. CCC Descriptors have a pre-defined UUID value (0x2902). By comparing the UUID of each Descriptor found against 0x2902 (or the '#define ATT_DESC_CLIENT_CHAR_CFG_128' macro in 'ble_std.h'), you can determine which handle is associated with a given Characteristic's CCCD.
  4. Using this handle, call 'CustomSrvice_SendWrite()' and write the desired CCCD value (bits 0 and 1).
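Put together, the added functionality could look roughly like the sketch below (the char_start_hdl/char_end_hdl bookkeeping is illustrative, and the gattc_disc_char_desc_ind field names should be verified against your SDK):

    /* Step 1: discover the Descriptors within one Characteristic's range. */
    struct gattc_disc_cmd *cmd;

    cmd = KE_MSG_ALLOC_DYN(GATTC_DISC_CMD,
                           KE_BUILD_ID(TASK_GATTC, ble_env.conidx),
                           TASK_APP, gattc_disc_cmd,
                           2 * sizeof(uint8_t));
    cmd->operation = GATTC_DISC_DESC_CHAR;
    cmd->uuid_len  = 2;
    cmd->seq_num   = 0x00;
    cmd->start_hdl = char_start_hdl;    /* e.g. 0x5 */
    cmd->end_hdl   = char_end_hdl;      /* e.g. 0x8 */
    cmd->uuid[0]   = 0;
    cmd->uuid[1]   = 0;
    ke_msg_send(cmd);

    /* Steps 2-4: in the GATTC_DISC_CHAR_DESC_IND handler, match the CCC
     * Descriptor UUID (0x2902, little-endian on air) and write to it. */
    static const uint8_t cccd_uuid[2] = { 0x02, 0x29 };
    if (param->uuid_len == 2 && !memcmp(param->uuid, cccd_uuid, 2))
    {
        uint8_t cccd_val[2] = { 0x01, 0x00 };    /* enable Notifications */
        CustomSrvice_SendWrite(ble_env.conidx, cccd_val, param->attr_hdl,
                               0x00, 2, GATTC_WRITE);
    }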

As for the UART interaction, I think it is safe to conclude that the handle that you are writing to on the Peripheral does not match the handle of the 'CS_IDX_TX_CHAR' that was discovered by the Central device. Can you add a PRINTF/RTT statement before the handle check to see what the two values being compared are (i.e. the values of 'param->handle' & 'cs_env.disc_att[CS_IDX_TX_CHAR].pointer_hdl')?