POST /v1beta/models/gemini-2.5-flash-nothinking:generateContent

gemini-2.5-flash-nothinking (Text to Text)

curl --request POST \
  --url https://api.example.com/v1beta/models/gemini-2.5-flash-nothinking:generateContent

API Key Authentication

GPTProto API uses Bearer token authentication. Every API request must include your API key (sk-xxxxx) as a Bearer token in the Authorization header: Authorization: Bearer sk-xxxxx.

Getting Your API Key

  1. Sign up for a GPTProto account at https://gptproto.com
  2. Navigate to the API Keys section in your dashboard
  3. Generate a new API key
  4. Copy and securely store your API key

For authentication details, please refer to the Authentication section.

Initiate Request

curl --location --request POST 'https://gptproto.com/v1beta/models/gemini-2.5-flash-nothinking:generateContent' \
--header "Authorization: Bearer $GPTPROTO_API_KEY" \
--header 'Content-Type: application/json' \
--data-raw '{
  "contents": [
    {
      "role": "user",
      "parts": [
        {
          "text": "who are you?"
        }
      ]
    }
  ]
}'
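The same request can be issued from Python. Below is a minimal sketch using only the standard library; the endpoint and headers mirror the curl call above, and the API key value is a placeholder. The actual network call is left commented out:

```python
import json
import urllib.request

API_KEY = "sk-xxxxx"  # placeholder; substitute your real GPTProto API key
URL = "https://gptproto.com/v1beta/models/gemini-2.5-flash-nothinking:generateContent"

payload = {
    "contents": [
        {"role": "user", "parts": [{"text": "who are you?"}]}
    ]
}

request = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# response = urllib.request.urlopen(request)  # uncomment to actually send
```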

Response Example

{
    "candidates": [
        {
            "content": {
                "role": "model",
                "parts": [
                    {
                        "text": "I am Gemini, a large language model built by Google.",
                        "thoughtSignature": "CvUOAY8***"
                    }
                ]
            },
            "finishReason": "STOP"
        }
    ],
    "usageMetadata": {
        "promptTokenCount": 4,
        "candidatesTokenCount": 12,
        "totalTokenCount": 508,
        "trafficType": "ON_DEMAND",
        "promptTokensDetails": [
            {
                "modality": "TEXT",
                "tokenCount": 4
            }
        ],
        "candidatesTokensDetails": [
            {
                "modality": "TEXT",
                "tokenCount": 12
            }
        ],
        "thoughtsTokenCount": 492
    },
    "modelVersion": "gemini-2.5-flash-nothinking",
    "createTime": "2025-12-22T13:15:39.906733Z",
    "responseId": "e0RJae2rN46R998P096OmQU"
}
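When handling a response like the one above, the generated text lives under candidates[0].content.parts, and totalTokenCount covers prompt, candidate, and thought tokens combined. A sketch of extracting both (field names taken from the example response; a robust client should treat optional fields defensively):

```python
import json

# A trimmed copy of the example response above.
response = json.loads("""{
  "candidates": [{
    "content": {"role": "model", "parts": [
      {"text": "I am Gemini, a large language model built by Google."}]},
    "finishReason": "STOP"}],
  "usageMetadata": {"promptTokenCount": 4, "candidatesTokenCount": 12,
    "totalTokenCount": 508, "thoughtsTokenCount": 492}
}""")

# Concatenate the text of every part in the first candidate.
parts = response["candidates"][0]["content"]["parts"]
text = "".join(part.get("text", "") for part in parts)

# totalTokenCount = prompt tokens + candidate tokens + thought tokens.
usage = response["usageMetadata"]
total = (usage["promptTokenCount"]
         + usage["candidatesTokenCount"]
         + usage.get("thoughtsTokenCount", 0))
print(text)
```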

Parameters

Path Parameters

| Parameter | Type | Required | Default | Range | Description |
| --- | --- | --- | --- | --- | --- |
| model | string | ✅ Yes | - | - | Model ID used to generate the response, formatted as models/{model}. |
| method | string | ✅ Yes | - | generateContent, streamGenerateContent | Method to use for content generation. |

Google Gemini API provides two methods for content generation, distinguished by whether they return responses incrementally (streaming) or all at once (non-streaming):

| Method | Example | Description |
| --- | --- | --- |
| generateContent | https://gptproto.com/v1beta/models/gemini-2.5-pro:generateContent | Generates a complete response all at once. Best for applications where you need the full response before processing. |
| streamGenerateContent | https://gptproto.com/v1beta/models/gemini-2.5-pro:streamGenerateContent | Streams the response incrementally as it’s generated. Ideal for chat interfaces and real-time applications where latency is important. |
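The two endpoints differ only in the method name after the colon, so either URL can be built from the model ID and method. A small helper sketch, using the base URL shown in the examples above:

```python
BASE_URL = "https://gptproto.com/v1beta"

def endpoint(model: str, method: str = "generateContent") -> str:
    """Build the request URL for a model ID and generation method."""
    if method not in ("generateContent", "streamGenerateContent"):
        raise ValueError(f"unsupported method: {method}")
    return f"{BASE_URL}/models/{model}:{method}"

endpoint("gemini-2.5-pro")
# -> "https://gptproto.com/v1beta/models/gemini-2.5-pro:generateContent"
```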

Core Parameters

| Parameter | Type | Required | Default | Range | Description |
| --- | --- | --- | --- | --- | --- |
| contents | array | ✅ Yes | - | - | Content of the current conversation with the model. For single-turn queries, this contains one instance. For multi-turn queries (e.g., chat), this contains the conversation history and the latest request. |
| > contents.role | string | ✅ Yes | - | user, model | The role of the message sender. |
| > contents.parts | array | ✅ Yes | - | - | The content parts of the message, which can contain different types of content (text, inlineData, etc.). |
| >> contents.parts.text | string | ✅ Yes | - | - | Text content of the part. For multimodal input details, see Multimodal Input. |

Advanced Parameters

| Parameter | Type | Required | Default | Range | Description |
| --- | --- | --- | --- | --- | --- |
| tools | array | ❌ No | - | - | List of tools the model may use to generate the next response. Supported tools include Function and codeExecution. |
| toolConfig | object | ❌ No | - | - | Configuration for any tools specified in the request. |
| safety_settings | array | ❌ No | - | - | List of unique SafetySetting instances for filtering unsafe content. Each SafetyCategory should have at most one setting. See SafetySetting. |
| generation_config | object | ❌ No | - | - | Configuration options for content generation. |
| > generation_config.temperature | number | ❌ No | - | 0.0-1.0 | Controls the randomness of the output. Lower values produce more deterministic results. |
| > generation_config.top_p | number | ❌ No | - | 0.0-1.0 | Nucleus sampling probability threshold. |
| > generation_config.top_k | integer | ❌ No | - | - | Top-k sampling parameter. |
| > generation_config.max_output_tokens | integer | ❌ No | - | - | Maximum number of tokens to generate. |
| > generation_config.thinking_config | object | ❌ No | - | - | Configuration for thinking functionality. If set for models that don’t support thinking, the system will return an error. See Thinking Config for details. |
| > generation_config.image_config | object | ❌ No | - | - | Configuration for image generation. If set for models that don’t support these configuration options, the system will return an error. See Image Config for details. |
| > generation_config.mediaResolution | enum | ❌ No | - | MEDIA_RESOLUTION_UNSPECIFIED, MEDIA_RESOLUTION_LOW, MEDIA_RESOLUTION_MEDIUM, MEDIA_RESOLUTION_HIGH | If specified, uses the specified media resolution. Note: this field describes the resolution of input media. To control the resolution of output images, use the imageConfig field instead. |

Multimodal Input

{
  "contents": [
    {
      "role": "user",
      "parts": [
        {
          "inline_data": {
            "mime_type": "image/jpeg",
            "data": "base64-encoded-image-data"
          }
        },
        {
          "text": "Describe this image."
        }
      ]
    }
  ]
}
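Building the inline_data part from a local file amounts to base64-encoding the raw bytes before placing them in the payload. A sketch with a hypothetical helper (the byte string here is a stand-in for a real image file):

```python
import base64
import json

def inline_image_part(image_bytes: bytes, mime_type: str = "image/jpeg") -> dict:
    """Wrap raw media bytes as a base64-encoded inline_data part."""
    return {
        "inline_data": {
            "mime_type": mime_type,
            "data": base64.b64encode(image_bytes).decode("ascii"),
        }
    }

payload = {
    "contents": [
        {
            "role": "user",
            "parts": [
                inline_image_part(b"\xff\xd8\xff\xe0fake-jpeg-bytes"),  # stand-in bytes
                {"text": "Describe this image."},
            ],
        }
    ]
}
body = json.dumps(payload)
```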
| Parameter | Type | Required | Default | Range / Example | Description |
| --- | --- | --- | --- | --- | --- |
| contents.parts | array | ✅ Yes | - | text, inlineData, fileData | The content parts of the message, which can contain different types of content. |
| > contents.parts.inlineData | object | ❌ No | - | - | Inline media content. If used, data must be base64-encoded. |
| >> contents.parts.inlineData.mimeType | string | ✅ Yes (if inline_data is used) | - | application/pdf, image/jpeg | The IANA-standard MIME type of the source data. If the provided MIME type is not supported, the system will return an error. |
| >> contents.parts.inlineData.data | string | ✅ Yes (if inline_data is used) | - | - | Base64-encoded media data. |
| > contents.parts.fileData | object | ❌ No | - | - | File media content. If used, fileUri must be provided. |
| >> contents.parts.fileData.mimeType | string | ✅ Yes (if file_data is used) | - | application/pdf, image/jpeg | The IANA-standard MIME type of the source data. If the provided MIME type is not supported, the system will return an error. |
| >> contents.parts.fileData.fileUri | string | ✅ Yes (if file_data is used) | - | - | The URI of the file to be processed. |

Safety Settings

{
  "safety_settings": [
    {
      "category": "HARM_CATEGORY_HATE_SPEECH",
      "threshold": "BLOCK_MEDIUM_AND_ABOVE"
    },
    {
      "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
      "threshold": "BLOCK_HIGH_AND_ABOVE"
    }
  ]
}
| Parameter | Type | Required | Default | Range / Example | Description |
| --- | --- | --- | --- | --- | --- |
| category | string | ✅ Yes | - | HARM_CATEGORY_HATE_SPEECH, HARM_CATEGORY_SEXUALLY_EXPLICIT, HARM_CATEGORY_DANGEROUS_CONTENT, HARM_CATEGORY_HARASSMENT, HARM_CATEGORY_CIVIC_INTEGRITY | The harm category to apply the safety setting to. |
| threshold | string | ✅ Yes | - | BLOCK_ONLY_HIGH, BLOCK_MEDIUM_AND_ABOVE, BLOCK_LOW_AND_ABOVE, BLOCK_NONE | The threshold for blocking content. |

Generation Config

{
  "generation_config": {
    "temperature": 0.7,
    "top_p": 0.95,
    "top_k": 40,
    "max_output_tokens": 1024,
    "stop_sequences": ["Human:"],
    "response_mime_type": "text/plain"
  }
}
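A generation_config is attached as a top-level key alongside contents, so a complete request body combining the two looks like the sketch below (values copied from the JSON example above):

```python
import json

request_body = {
    "contents": [
        {"role": "user", "parts": [{"text": "who are you?"}]}
    ],
    "generation_config": {
        "temperature": 0.7,        # 0.0-1.0; lower is more deterministic
        "top_p": 0.95,             # nucleus sampling threshold
        "top_k": 40,               # top-k sampling parameter
        "max_output_tokens": 1024,
        "stop_sequences": ["Human:"],
        "response_mime_type": "text/plain",
    },
}
body = json.dumps(request_body)
```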
| Parameter | Type | Required | Default | Range / Example | Description |
| --- | --- | --- | --- | --- | --- |
| temperature | number | ❌ No | - | 0.0-1.0 | Controls the randomness of the output. |
| top_p | number | ❌ No | - | 0.0-1.0 | Nucleus sampling threshold. |
| top_k | integer | ❌ No | - | - | Top-k sampling parameter. |
| max_output_tokens | integer | ❌ No | - | - | Maximum number of tokens to generate. |
| stop_sequences | array | ❌ No | - | - | Sequences at which to stop generation. |
| response_mime_type | string | ❌ No | - | text/plain, application/json | MIME type of the response. |

Thinking Config

{
  "generation_config": {
    "thinking_config": {
      "include_thoughts": true,
      "thinking_budget": 1000,
      "thinking_level": "HIGH"
    }
  }
}
Note: thinking_level is only supported on Gemini 3.0 and above. It cannot be used together with thinking_budget; doing so will return an error.
| Parameter | Type | Required | Default | Range / Example | Description |
| --- | --- | --- | --- | --- | --- |
| thinking_config | object | ❌ No | - | - | Configuration for thinking functionality. |
| > thinking_config.include_thoughts | boolean | ❌ No | - | - | Indicates whether to include thoughts in the response. If true, thoughts are only returned when thinking is enabled. |
| > thinking_config.thinking_budget | integer | ❌ No | - | - | Specifies the maximum number of tokens for generated thoughts. |
| > thinking_config.thinking_level | enum | ❌ No | THINKING_LEVEL_UNSPECIFIED | THINKING_LEVEL_UNSPECIFIED, HIGH, LOW | Controls the maximum depth of the model’s internal reasoning process before generating a response. If not specified, the default is HIGH. Recommended for Gemini 3 or newer models. Using it with older models may cause errors. |
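Since thinking_budget and thinking_level cannot be used together, it can be worth checking the config client-side before sending rather than waiting for the API error. A hypothetical validator sketch (this helper is illustrative, not part of the API):

```python
def validate_thinking_config(config: dict) -> dict:
    """Reject thinking configs that set both thinking_budget and
    thinking_level, which the API treats as an error."""
    if "thinking_budget" in config and "thinking_level" in config:
        raise ValueError(
            "thinking_budget and thinking_level cannot be used together"
        )
    return config

validate_thinking_config({"include_thoughts": True, "thinking_budget": 1000})
```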

Image Config

{
  "generation_config": {
    "image_config": {
      "aspect_ratio": "1:1",
      "image_size": "1k"
    }
  }
}
| Parameter | Type | Required | Default | Range / Example | Description |
| --- | --- | --- | --- | --- | --- |
| image_config | object | ❌ No | - | - | Configuration for image generation. |
| > image_config.aspect_ratio | string | ❌ No | - | 1:1, 2:3, 3:2, 3:4, 4:3, 9:16, 16:9, 21:9 | Aspect ratio of the generated image. If not specified, the model will select the appropriate aspect ratio based on the specified content. |
| > image_config.image_size | string | ❌ No | - | 1k, 2k, 4k | Approximate size of the generated image. If not specified, the model will use the default value of 1k. |

Error Codes

Common Error Codes

| Error Code | Error Name | Description |
| --- | --- | --- |
| 401 | Unauthorized | API key is missing or invalid |
| 403 | Forbidden | Your API key doesn’t have permission to access this resource, or your balance is insufficient for the requested operation |
| 429 | Too Many Requests | You’ve exceeded your rate limit |
| 500 | Internal Server Error | An internal server error occurred |
| 503 | Content Policy Violation | Content blocked due to safety concerns (the actual HTTP status code returned is 400) |
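A client can branch on these status codes when deciding whether a failed request is worth retrying. A rough sketch; the retry policy below is an assumption for illustration, not something the API mandates:

```python
def should_retry(status_code: int) -> bool:
    """Decide whether a failed request is worth retrying.
    401/403 are credential or permission problems, so retrying won't help;
    429 and 5xx are typically transient and may succeed on a later attempt.
    """
    if status_code in (401, 403):
        return False  # fix the API key, permissions, or balance first
    if status_code == 429:
        return True   # rate limited; back off before retrying
    return 500 <= status_code < 600  # server-side errors are transient

should_retry(429)  # True
```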