GATTC_CMP_EVT Strong Recommendation

According to RW-BLE-GATT-IS.pdf

“…should wait for GATTC_CMP_EVT of the current GATT request…”.

Unfortunately I have not found any sample code showing how to do that. Moreover, this is something I would expect the BLE stack to take care of, e.g. by queueing GATTC_SendEvtCmd calls.
Anyway, what is your recommendation (in the form of a code snippet) for making an application acting as a GATT Server (one that goes into Sleep Mode) wait for GATTC_CMP_EVT before sending the next request?

Hi @darrew,

I believe the ‘ble_central_client_bond’ sample application from our RSL10 CMSIS Pack demonstrates how to do a Long Write using this approach.

It is recommended because the Kernel queue can only hold so many messages, and it is possible to run into situations where scheduling fails because the queue is full (i.e. if you send a large number of Notifications without checking for the CMP event).

To prevent this, it is recommended to wait for the CMP event when queueing up multiple stack messages simultaneously (such as when executing a Long Write).

Below I have attached the case statement used to enforce this recommendation within the sample. It is important to note that the ‘gattc_write_complete’ flag is checked in other parts of the code before queuing up additional Write Requests; a sketch of such a check follows the snippet.

        case GATTC_CMP_EVT:
        {
            const struct gattc_cmp_evt* p = param;
            if (p->operation == GATTC_WRITE)
            {
                /* Mark the write as complete on success, or on disconnect,
                 * so that the flag does not remain stuck at false. */
                if ((p->status == GAP_ERR_NO_ERROR)
                    || (p->status == GAP_ERR_DISCONNECTED))
                {
                    cs_env[conidx].gattc_write_complete = true;
                }
            }
        }
        break;
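
As a rough sketch of that check (the helper name below is hypothetical; cs_env and the CUSTOMSC_PrepareWrite helper come from the sample):

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical gatekeeper: only queue the next Write Request once the
 * previous one has been confirmed by GATTC_CMP_EVT. */
static bool CUSTOMSC_TryQueueNextWrite(uint8_t conidx)
{
    /* The previous write has not been confirmed yet: do not queue
     * another message. */
    if (cs_env[conidx].gattc_write_complete == false)
    {
        return false;
    }

    /* Clear the flag; it is set again in the GATTC_CMP_EVT case above. */
    cs_env[conidx].gattc_write_complete = false;

    /* Queue the next request here, e.g. via the sample's
     * CUSTOMSC_PrepareWrite() helper. */
    return true;
}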

Hi @darrew

I think something similar can be found in the sample code for RSL10 SmartShot platform (SECO-RSL10-CAM-GEVB).

Here the GATTC_CMP_EVT is used together with a queue to transmit large amounts of picture data using notifications. The queue limits the number of packets that are passed to the BLE Stack with the GATTC_SEND_EVT_CMD, since the stack has limited memory space for messages, as Brandon mentioned.

I have attached the whole app_ble_ptss.c file from the smartshot_demo_cam sample code, which implements this, in case you wish to dig deeper.
app_ble_ptss.c (20.4 KB)

Here you might be interested in the PTSS_ImageDataPush and PTSS_MsgHandler functions that manage the flow of data. This approach should also be applicable when sleep mode is used.

PTSS_MsgHandler function code
static void PTSS_MsgHandler(ke_msg_id_t const msg_id, void const *param,
        ke_task_id_t const dest_id, ke_task_id_t const src_id)
{
    switch (msg_id)
    {
        case GATTC_CMP_EVT:
        {
            const struct gattc_cmp_evt *p = param;

            if (p->operation == GATTC_NOTIFY)
            {
                uint16_t attidx = ptss_env.att.attidx_offset
                                  + ATT_PTSS_IMAGE_DATA_VAL_0;

                /* If the sequence number matches the attribute index of the
                 * Image Data Value attribute, reduce the number of pending
                 * packets. */
                if (p->seq_num == attidx)
                {
                    ENSURE(ptss_env.transfer.packets_pending > 0);

                    ptss_env.transfer.packets_pending -= 1;

                    if (ptss_env.transfer.bytes_queued
                        < ptss_env.transfer.bytes_total)
                    {
                        /* Not all data is queued yet.
                         * Notify application that PTSS is ready to accept image
                         * data for next packet. */
                        ptss_env.att.cp.callback(
                                PTSS_OP_IMAGE_DATA_SPACE_AVAIL_IND,
                                NULL);
                    }
                    else
                    {
                        /* Last data notification was transferred. */
                        if (ptss_env.transfer.packets_pending == 0)
                        {
                            /* Determine next state of PTSS. */
                            switch (ptss_env.att.cp.capture_mode)
                            {
                                case PTSS_CONTROL_POINT_OPCODE_CAPTURE_ONE_SHOT_REQ:
                                    ptss_env.transfer.state = PTSS_STATE_CONNECTED;
                                    ptss_env.att.cp.capture_mode = 0;
                                    break;

                                case PTSS_CONTROL_POINT_OPCODE_CAPTURE_CONTINUOUS_REQ:
                                    ptss_env.transfer.state = PTSS_STATE_CAPTURE_REQUEST;
                                    break;

                                default:
                                    INVARIANT(false);
                                    break;
                            }

                            /* Notify application that Image data transfer finished. */
                            ptss_env.att.cp.callback(
                                    PTSS_OP_IMAGE_DATA_TRANSFER_DONE_IND,
                                    NULL);
                        }
                    }
                }
            }

            break;
        }

        /* Rest of the function cut for this snippet */
PTSS_ImageDataPush function code
int32_t PTSS_ImageDataPush(const uint8_t* p_img_data, const int32_t data_len)
{
    if ((p_img_data == NULL)
        || (data_len <= 0)
        || (data_len > PTSS_IMG_DATA_MAX_SIZE))
    {
        return PTSS_ERR;
    }

    if (ptss_env.transfer.state != PTSS_STATE_IMG_DATA_TRANSMISSION)
    {
       return PTSS_ERR_NOT_PERMITTED;
    }

    /* Safe to cast away const.
     * The pointer is used as a read-only iterator over the data array.
     */
    uint8_t* p_data = (uint8_t*) p_img_data;
    int32_t bytes_left = data_len;

    while (bytes_left > 0)
    {
        /* Start to fill out new packet if value buffer is clear.
         *
         * Populate notification with first 4 bytes that contain data offset
         * from start of file.
         */
        if (ptss_env.att.img_data.value_length == 0)
        {
            memcpy(ptss_env.att.img_data.value, &ptss_env.transfer.bytes_queued,
                    PTSS_INFO_OFFSET_LENGTH);
            ptss_env.att.img_data.value_length += PTSS_INFO_OFFSET_LENGTH;
        }

        const uint32_t max_data_octets = PTSS_GetMaxDataOctets();
        const uint32_t avail_space = max_data_octets
                                     - ptss_env.att.img_data.value_length;

        ENSURE(avail_space > 0);
        ENSURE(avail_space < ptss_env.max_tx_octets);

        if (avail_space >= bytes_left)
        {
            /* All pending data can be fit into single packet. */
            memcpy(ptss_env.att.img_data.value + ptss_env.att.img_data.value_length,
                    p_data, bytes_left);

            ptss_env.att.img_data.value_length += bytes_left;
            ptss_env.transfer.bytes_queued += bytes_left;
            p_data += bytes_left;
            bytes_left = 0;

            /* Transmit the packet if it is now exactly full. */
            if (ptss_env.att.img_data.value_length >= max_data_octets)
            {
                ENSURE(ptss_env.att.img_data.value_length == max_data_octets);

                PTSS_TransmitImgDataNotification();
            }
        }
        else
        {
            /* Pending data must be split into multiple packets.
             *
             * Fill as much data as possible into currently open packet.
             */
            memcpy(ptss_env.att.img_data.value + ptss_env.att.img_data.value_length,
                    p_data, avail_space);

            ptss_env.att.img_data.value_length += avail_space;
            ptss_env.transfer.bytes_queued += avail_space;
            p_data += avail_space;
            bytes_left -= avail_space;

            PTSS_TransmitImgDataNotification();
        }

        /* Check if EOF was reached. */
        if (ptss_env.transfer.bytes_queued >= ptss_env.transfer.bytes_total)
        {
            ENSURE(ptss_env.transfer.bytes_queued == ptss_env.transfer.bytes_total);

            /* Transmit any remaining image data. */
            if (ptss_env.att.img_data.value_length > PTSS_INFO_OFFSET_LENGTH)
            {
                PTSS_TransmitImgDataNotification();
            }
        }
    }

    return PTSS_OK;
}

@brandon.shannon and @lukas.mandak Thank you for the prompt answer. I will try to summarize both approaches:

-In the case of ‘ble_central_client_bond’ (GATT Client), the sample code sends messages to the kernel periodically and ONLY if GATTC_CMP_EVT has been received (flag is true), i.e.

void CUSTOMSC_Timer(void)
{
   /* Restart timer */
    ke_timer_set(CUSTOMSC_TIMER, TASK_APP, CUSTOMSC_TIMER_200MS_SETTING);
...
        case WRITE_QUEUED:
        {
            if (CUSTOMSC_QueuedWriteRun() == 0)
        ....
}
uint8_t CUSTOMSC_QueuedWriteRun(void)
{
...
        if (cs_env[i].gattc_write_complete == false) /* check flag */
        {
            return (1); /* do not send a message if GATTC_CMP_EVT has not been received */
        }
...
     CUSTOMSC_PrepareWrite(...) /* send message (GATTC_WRITE_CMD) to kernel */
...
     CUSTOMSC_ExecWrite(...) /* send message (GATTC_EXECUTE_WRITE_CMD) to kernel */
...
 return (0); /* when finished */
}
void CUSTOMSC_MsgHandler(...)
{
...
        case GATTC_CMP_EVT:
        {
            if (p->operation == GATTC_WRITE)
            {
            ....
                    cs_env[conidx].gattc_write_complete = true; /* set flag */
...

Here, the sample code does not send messages to the kernel if GATTC_CMP_EVT has not been received. The use of a timer (which allows returning to the main loop, and therefore to the scheduler) and a flag is crucial.

-In the case of the PTSS, a while loop does the job, i.e.

int32_t PTSS_ImageDataPush(...)
{
  while (bytes_left > 0)
 {
  ...
  bytes_left -= avail_space;
  PTSS_TransmitImgDataNotification();
  ...
  }
}
...
static void PTSS_TransmitImgDataNotification(void)
{
...
    GATTC_SendEvtCmd(0, GATTC_NOTIFY, ...);
    ptss_env.transfer.packets_pending += 1;
...
}

static void PTSS_MsgHandler(...)
{
...
        case GATTC_CMP_EVT:
        {
           ...
           ptss_env.transfer.packets_pending -= 1;
        }
...
}

Here, assuming PTSS_ImageDataPush is called just once, the application stays in the while loop until bytes_left <= 0. Therefore, all messages (GATTC_NOTIFY) are sent to the kernel without checking that packets_pending is not 0, aren’t they? If so, then there is no flow control. Is this correct?

The flow control is done outside, in the application code, which calls PTSS_GetMaxImageDataPushSize before each call to PTSS_ImageDataPush. This determines how many bytes the PTSS is able to process right now, so that calls to PTSS_ImageDataPush are not blocking.
The result varies depending on the negotiated DLE size (Data Length Extension, see the GAPC_LE_PKT_SIZE_IND message) and is limited to a maximum of 3 queued packets:

/* Determine:
 * (A) Number of packets that can be queued.
 * (B) Amount of data that can fit into a single packet.
 * (C) Amount of data that is already queued for transmission in next
 *     packet.
 *
 * avail = (A * B) - C
 */
avail_bytes = ((PTSS_MAX_PENDING_PACKET_COUNT
               - ptss_env.transfer.packets_pending)
              * (PTSS_GetMaxDataOctets() - PTSS_INFO_OFFSET_LENGTH))
              - (ptss_env.att.img_data.value_length
                 - PTSS_INFO_OFFSET_LENGTH);
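
To illustrate, a worked example (all concrete values below are assumed, not taken from the sample):

/* Worked example with assumed values:
 *   PTSS_MAX_PENDING_PACKET_COUNT = 3    (the "max 3 queued packets" above)
 *   PTSS_INFO_OFFSET_LENGTH       = 4    (the 4-byte offset, see the code)
 *   PTSS_GetMaxDataOctets()       = 247  (hypothetical DLE-negotiated value)
 *   packets_pending               = 1
 *   img_data.value_length         = 4    (only the offset queued so far)
 *
 * avail_bytes = (3 - 1) * (247 - 4) - (4 - 4) = 486
 */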

@lukas.mandak Does the code guarantee that every message (GATTC_NOTIFY) sent to the kernel is followed by a GATTC_CMP_EVT?

The GATT specification states that GATTC_CMP_EVT is sent as a response once the command is processed. For notifications, that is the moment it is sent over the air.
The specification does not mention any error codes, but I would expect that you still get the CMP response with an error code if an error happens; I have yet to see one.

Let me reformulate my question: does the PTSS code check that every message (GATTC_NOTIFY) sent to the kernel is followed by a GATTC_CMP_EVT BEFORE sending a new message?

The PTSS queues up to 3 messages with the GATTC_NOTIFY command before stopping to wait for the completion GATTC_CMP_EVT message. A sequence number is set in the command so the service can match the completion events with its own notifications.

This queuing is implemented mainly for older devices that do not have DLE, or have limited support for DLE.
A smaller packet size means multiple packets can be transmitted in a single connection event.
Therefore, multiple notifications are queued to use the connection event more efficiently in these cases.
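
A condensed sketch of the sending side (the wrapper name is hypothetical, and the GATTC_SendEvtCmd(conidx, operation, seq_num, attidx, length, value) signature is assumed from the PTSS sources quoted above):

#include <stdint.h>

/* Hypothetical wrapper mirroring PTSS_TransmitImgDataNotification:
 * stop queueing once the self-imposed limit is reached, and tag each
 * command so that GATTC_CMP_EVT can be matched to it. */
static void TransmitWithSeqNum(uint8_t conidx, uint16_t attidx,
                               uint16_t length, uint8_t *value)
{
    /* Wait for GATTC_CMP_EVT to decrement packets_pending before
     * queueing more than PTSS_MAX_PENDING_PACKET_COUNT messages. */
    if (ptss_env.transfer.packets_pending >= PTSS_MAX_PENDING_PACKET_COUNT)
    {
        return;
    }

    /* The attribute index doubles as the sequence number, so the handler
     * above can match completions via (p->seq_num == attidx). */
    GATTC_SendEvtCmd(conidx, GATTC_NOTIFY, attidx, attidx, length, value);

    ptss_env.transfer.packets_pending += 1;
}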

@lukas.mandak Thank you for the quick answer.
According to your statement

The PTSS queues up to 3 messages with the GATTC_NOTIFY command before stopping to wait for the completion GATTC_CMP_EVT message.

That means the PTSS code does NOT wait for GATTC_CMP_EVT after every single message sent to the kernel (it “queues up to 3”). Right?

If this is the case then the PTSS code does NOT follow the strong recommendation of the RW-BLE-GATT-IS.pdf,

the application should wait for the GATTC_CMP_EVT of the current GATT request BEFORE making additional request.

Or, do I interpret the recommendation wrongly?

That is correct.
PTSS does not follow the recommendation from RW-BLE-GATT-IS exactly as written, since it can have more than one GATTC message queued.

The RW-BLE-GATT-IS does not explain the specific reason behind the recommendation in detail, which is the limited memory space for Kernel messages.
So instead of following the recommendation literally, the PTSS avoids the underlying issue of running out of memory by setting a fixed maximum number of queued messages.
Queuing 3 messages with a 255-byte payload will not overwhelm the queue in that specific application.

The sequential execution that is also mentioned in the recommendation does not seem to be a problem either.

  • The stack seems to always send notifications in the correct order (that is just my observation).
  • The PTSS payload has its own sequencing scheme to reassemble notifications if they are sent out of order.

Queuing 3 messages with a 255-byte payload will not overwhelm the queue in that specific application.

What is the maximum number of bytes that can be sent to the kernel without running into a full kernel queue?

I have found multiple similar heap memory definitions in the rwip_config.h header, but I believe this one is the correct one for the total kernel message heap size:

/// Kernel Message Heap
#define RWIP_HEAP_MSG_SIZE          4096

Maybe @brandon.shannon knows if this is the correct figure or not.
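
Assuming that figure is correct, a quick sanity check for the PTSS case (ignoring any per-message kernel header overhead, which is not documented here):

#include "rwip_config.h"   /* defines RWIP_HEAP_MSG_SIZE */

/* 3 queued notifications * 255-byte payload = 765 bytes, well below 4096. */
_Static_assert((3 * 255) < RWIP_HEAP_MSG_SIZE,
               "queued notification payloads must fit in the kernel message heap");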

I have been experimenting with data transfers over the last few weeks, and some interesting conclusions can be reported.

The experiment setup is as follows:

  1. A client and a server establish a connection with a 10 ms interval, a PHY rate of 1 Mbps, a Data Length Extension of 247 bytes, and the maximum MTU.
  2. The client uses ATT Write Commands to send data whereas the Server uses ATT Notifications.
  3. The ATT attribute data length is 10 bytes for the Write Commands and 244 bytes for the Notifications.
  4. The client sends ATT Write Commands as fast as possible (within a for loop), whereas the server implements the “strong recommendation”, i.e. it waits for the GATTC_CMP_EVT before sending the pending Notification.
  5. One Notification shall be sent by the Server per incoming Write Command.

Under these circumstances, implementing the “strong recommendation” is a bad option, since the kernel does not trigger GATTC_CMP_EVT fast enough (i.e. the application buffer holding the Notification pending to be sent to the kernel is overwritten by the Notifications for new incoming Write Commands). Even implementing a queue of size 3 does not solve the problem. Interestingly enough, if the Server does not implement the “strong recommendation” (i.e. it sends the Notifications to the kernel immediately without waiting for GATTC_CMP_EVT), everything works fine.
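
To illustrate the failure mode, a minimal sketch (all names are hypothetical, not taken from the actual experiment code) of a single pending buffer gated by the CMP flag:

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

static uint8_t pending_notif[244];    /* single pending-Notification buffer */
static bool    cmp_received = true;   /* set in the GATTC_CMP_EVT handler   */

static void OnWriteCommand(const uint8_t *data, uint16_t length)
{
    if (length > sizeof(pending_notif))
    {
        length = sizeof(pending_notif);
    }

    /* If GATTC_CMP_EVT lags behind the incoming Write Commands, this
     * memcpy overwrites a Notification that was never handed to the
     * kernel. */
    memcpy(pending_notif, data, length);

    /* Per the "strong recommendation", the buffer may only go to the
     * kernel once the previous Notification has completed. */
    if (cmp_received)
    {
        cmp_received = false;
        /* Hand pending_notif to the kernel here, e.g. via
         * GATTC_SendEvtCmd(). */
    }
}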

The experiment shows that, for reasons I cannot explain, the kernel is not able to trigger GATTC_CMP_EVT as fast as it is able to handle messages.

This means the “strong recommendation” should not be read as “strong”.