
Qwen Code Configuration

Tip

Authentication / API keys: Authentication (Qwen OAuth vs OpenAI-compatible API) and auth-related environment variables (like OPENAI_API_KEY) are documented in Authentication.

Note

Note on the new configuration format: The format of the settings.json file has been updated to a new, more organized structure. The old format will be migrated automatically.

Qwen Code offers several ways to configure its behavior, including environment variables, command-line arguments, and settings files. This document outlines the different configuration methods and available settings.

Configuration layers

Configuration is applied in the following order of precedence (lower numbers are overridden by higher numbers):

| Level | Configuration Source | Description |
|---|---|---|
| 1 | Default values | Hardcoded defaults within the application |
| 2 | System defaults file | System-wide default settings that can be overridden by other settings files |
| 3 | User settings file | Global settings for the current user |
| 4 | Project settings file | Project-specific settings |
| 5 | System settings file | System-wide settings that override all other settings files |
| 6 | Environment variables | System-wide or session-specific variables, potentially loaded from .env files |
| 7 | Command-line arguments | Values passed when launching the CLI |

Settings files

Qwen Code uses JSON settings files for persistent configuration. There are four locations for these files:

| File Type | Location | Scope |
|---|---|---|
| System defaults file | Linux: /etc/qwen-code/system-defaults.json<br>Windows: C:\ProgramData\qwen-code\system-defaults.json<br>macOS: /Library/Application Support/QwenCode/system-defaults.json<br>The path can be overridden using the QWEN_CODE_SYSTEM_DEFAULTS_PATH environment variable. | Provides a base layer of system-wide default settings. These settings have the lowest precedence and are intended to be overridden by user, project, or system override settings. |
| User settings file | ~/.qwen/settings.json (where ~ is your home directory). | Applies to all Qwen Code sessions for the current user. |
| Project settings file | .qwen/settings.json within your project's root directory. | Applies only when running Qwen Code from that specific project. Project settings override user settings. |
| System settings file | Linux: /etc/qwen-code/settings.json<br>Windows: C:\ProgramData\qwen-code\settings.json<br>macOS: /Library/Application Support/QwenCode/settings.json<br>The path can be overridden using the QWEN_CODE_SYSTEM_SETTINGS_PATH environment variable. | Applies to all Qwen Code sessions on the system, for all users. System settings override user and project settings. May be useful for system administrators at enterprises who need control over users' Qwen Code setups. |
Note

Note on environment variables in settings: String values within your settings.json files can reference environment variables using either $VAR_NAME or ${VAR_NAME} syntax. These variables will be automatically resolved when the settings are loaded. For example, if you have an environment variable MY_API_TOKEN, you could use it in settings.json like this: "apiKey": "$MY_API_TOKEN".
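For instance, if an environment variable named MY_API_TOKEN is set in your shell, a settings.json fragment could reference it like this (the variable name here is purely illustrative):

```json
{
  "advanced": {
    "tavilyApiKey": "$MY_API_TOKEN"
  }
}
```

This keeps secrets out of the settings file itself; the value is resolved from the environment each time settings are loaded.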

The .qwen directory in your project

In addition to a project settings file, a project's .qwen directory can contain other project-specific files related to Qwen Code's operation, such as a project-specific .env file (.qwen/.env), custom sandbox profiles (e.g., sandbox-macos-custom.sb), and the project summary used by the welcome-back feature (.qwen/PROJECT_SUMMARY.md).

Available settings in settings.json

Settings are organized into categories. All settings should be placed within their corresponding top-level category object in your settings.json file.

general

| Setting | Type | Description | Default |
|---|---|---|---|
| general.preferredEditor | string | The preferred editor to open files in. | undefined |
| general.vimMode | boolean | Enable Vim keybindings. | false |
| general.disableAutoUpdate | boolean | Disable automatic updates. | false |
| general.disableUpdateNag | boolean | Disable update notification prompts. | false |
| general.checkpointing.enabled | boolean | Enable session checkpointing for recovery. | false |

output

| Setting | Type | Description | Default | Possible Values |
|---|---|---|---|---|
| output.format | string | The format of the CLI output. | "text" | "text", "json" |

ui

| Setting | Type | Description | Default |
|---|---|---|---|
| ui.theme | string | The color theme for the UI. See Themes for available options. | undefined |
| ui.customThemes | object | Custom theme definitions. | {} |
| ui.hideWindowTitle | boolean | Hide the window title bar. | false |
| ui.hideTips | boolean | Hide helpful tips in the UI. | false |
| ui.hideBanner | boolean | Hide the application banner. | false |
| ui.hideFooter | boolean | Hide the footer from the UI. | false |
| ui.showMemoryUsage | boolean | Display memory usage information in the UI. | false |
| ui.showLineNumbers | boolean | Show line numbers in code blocks in the CLI output. | true |
| ui.showCitations | boolean | Show citations for generated text in the chat. | true |
| ui.enableWelcomeBack | boolean | Show a welcome-back dialog when returning to a project with conversation history. When enabled, Qwen Code automatically detects if you're returning to a project with a previously generated project summary (.qwen/PROJECT_SUMMARY.md) and shows a dialog allowing you to continue your previous conversation or start fresh. This feature integrates with the /summary command and the quit confirmation dialog. | true |
| ui.accessibility.disableLoadingPhrases | boolean | Disable loading phrases for accessibility. | false |
| ui.accessibility.screenReader | boolean | Enables screen reader mode, which adjusts the TUI for better compatibility with screen readers. | false |
| ui.customWittyPhrases | array of strings | A list of custom phrases to display during loading states. When provided, the CLI cycles through these phrases instead of the default ones. | [] |

ide

| Setting | Type | Description | Default |
|---|---|---|---|
| ide.enabled | boolean | Enable IDE integration mode. | false |
| ide.hasSeenNudge | boolean | Whether the user has seen the IDE integration nudge. | false |

privacy

| Setting | Type | Description | Default |
|---|---|---|---|
| privacy.usageStatisticsEnabled | boolean | Enable collection of usage statistics. | true |

model

| Setting | Type | Description | Default |
|---|---|---|---|
| model.name | string | The Qwen model to use for conversations. | undefined |
| model.maxSessionTurns | number | Maximum number of user/model/tool turns to keep in a session. -1 means unlimited. | -1 |
| model.summarizeToolOutput | object | Enables or disables the summarization of tool output. You can specify the token budget for the summarization using the tokenBudget setting. Note: currently only the run_shell_command tool is supported. For example: {"run_shell_command": {"tokenBudget": 2000}} | undefined |
| model.generationConfig | object | Advanced overrides passed to the underlying content generator. Supports request controls such as timeout, maxRetries, and disableCacheControl, along with fine-tuning knobs under samplingParams (for example temperature, top_p, max_tokens). Leave unset to rely on provider defaults. | undefined |
| model.chatCompression.contextPercentageThreshold | number | Sets the threshold for chat history compression as a percentage of the model's total token limit. This is a value between 0 and 1 that applies to both automatic compression and the manual /compress command. For example, a value of 0.6 triggers compression when the chat history exceeds 60% of the token limit. Use 0 to disable compression entirely. | 0.7 |
| model.skipNextSpeakerCheck | boolean | Skip the next speaker check. | false |
| model.skipLoopDetection | boolean | Disables loop detection checks. Loop detection prevents infinite loops in AI responses but can generate false positives that interrupt legitimate workflows. Enable this option if you experience frequent false-positive loop detection interruptions. | false |
| model.skipStartupContext | boolean | Skips sending the startup workspace context (environment summary and acknowledgement) at the beginning of each session. Enable this if you prefer to provide context manually or want to save tokens on startup. | false |
| model.enableOpenAILogging | boolean | Enables logging of OpenAI API calls for debugging and analysis. When enabled, API requests and responses are logged to JSON files. | false |
| model.openAILoggingDir | string | Custom directory path for OpenAI API logs. If not specified, defaults to logs/openai in the current working directory. Supports absolute paths, relative paths (resolved from the current working directory), and ~ expansion (home directory). | undefined |

Example model.generationConfig:

```json
{
  "model": {
    "generationConfig": {
      "timeout": 60000,
      "disableCacheControl": false,
      "samplingParams": {
        "temperature": 0.2,
        "top_p": 0.8,
        "max_tokens": 1024
      }
    }
  }
}
```

model.openAILoggingDir examples:

  • "~/qwen-logs" - Logs to ~/qwen-logs directory
  • "./custom-logs" - Logs to ./custom-logs relative to current directory
  • "/tmp/openai-logs" - Logs to absolute path /tmp/openai-logs
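The model.chatCompression setting described above can likewise be sketched as a settings.json fragment; this example (with an illustrative threshold) triggers compression once the chat history exceeds 60% of the model's token limit:

```json
{
  "model": {
    "chatCompression": {
      "contextPercentageThreshold": 0.6
    }
  }
}
```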

context

| Setting | Type | Description | Default |
|---|---|---|---|
| context.fileName | string or array of strings | The name of the context file(s). | undefined |
| context.importFormat | string | The format to use when importing memory. | undefined |
| context.discoveryMaxDirs | number | Maximum number of directories to search for memory. | 200 |
| context.includeDirectories | array | Additional absolute or relative paths to include in the workspace context. Missing directories are skipped with a warning by default. Paths can use ~ to refer to the user's home directory. This setting can be combined with the --include-directories command-line flag. | [] |
| context.loadFromIncludeDirectories | boolean | Controls the behavior of the /memory refresh command. If set to true, QWEN.md files are loaded from all directories that are added. If set to false, QWEN.md is only loaded from the current directory. | false |
| context.fileFiltering.respectGitIgnore | boolean | Respect .gitignore files when searching. | true |
| context.fileFiltering.respectQwenIgnore | boolean | Respect .qwenignore files when searching. | true |
| context.fileFiltering.enableRecursiveFileSearch | boolean | Whether to enable searching recursively for filenames under the current tree when completing @ prefixes in the prompt. | true |
| context.fileFiltering.disableFuzzySearch | boolean | When true, disables fuzzy search when searching for files, which can improve performance on projects with a large number of files. | false |

Troubleshooting File Search Performance

If you are experiencing performance issues with file searching (e.g., with @ completions), especially in projects with a very large number of files, here are a few things you can try in order of recommendation:

  1. Use .qwenignore: Create a .qwenignore file in your project root to exclude directories that contain a large number of files that you don’t need to reference (e.g., build artifacts, logs, node_modules). Reducing the total number of files crawled is the most effective way to improve performance.
  2. Disable Fuzzy Search: If ignoring files is not enough, you can disable fuzzy search by setting disableFuzzySearch to true in your settings.json file. This will use a simpler, non-fuzzy matching algorithm, which can be faster.
  3. Disable Recursive File Search: As a last resort, you can disable recursive file search entirely by setting enableRecursiveFileSearch to false. This will be the fastest option as it avoids a recursive crawl of your project. However, it means you will need to type the full path to files when using @ completions.
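Combining the first two recommendations, a settings.json fragment for a very large repository might look like this (whether you also need step 3 depends on your project):

```json
{
  "context": {
    "fileFiltering": {
      "respectGitIgnore": true,
      "respectQwenIgnore": true,
      "disableFuzzySearch": true
    }
  }
}
```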

tools

| Setting | Type | Description | Default | Notes |
|---|---|---|---|---|
| tools.sandbox | boolean or string | Sandbox execution environment (can be a boolean or a path string). | undefined | |
| tools.shell.enableInteractiveShell | boolean | Use node-pty for an interactive shell experience. Fallback to child_process still applies. | false | |
| tools.core | array of strings | Restricts the set of built-in tools with an allowlist. You can also specify command-specific restrictions for tools that support it, like the run_shell_command tool. For example, "tools.core": ["run_shell_command(ls -l)"] will only allow the ls -l command to be executed. | undefined | |
| tools.exclude | array of strings | Tool names to exclude from discovery. You can also specify command-specific restrictions for tools that support it, like the run_shell_command tool. For example, "tools.exclude": ["run_shell_command(rm -rf)"] will block the rm -rf command. | undefined | Security Note: Command-specific restrictions in tools.exclude for run_shell_command are based on simple string matching and can be easily bypassed. This feature is not a security mechanism and should not be relied upon to safely execute untrusted code. It is recommended to use tools.core to explicitly select commands that can be executed. |
| tools.allowed | array of strings | A list of tool names that will bypass the confirmation dialog. This is useful for tools that you trust and use frequently. For example, ["run_shell_command(git)", "run_shell_command(npm test)"] will skip the confirmation dialog for any git and npm test commands. | undefined | |
| tools.approvalMode | string | Sets the default approval mode for tool usage. | default | Possible values: plan (analyze only; do not modify files or execute commands), default (require approval before file edits or shell commands run), auto-edit (automatically approve file edits), yolo (automatically approve all tool calls). |
| tools.discoveryCommand | string | Command to run for tool discovery. | undefined | |
| tools.callCommand | string | Defines a custom shell command for calling a specific tool that was discovered using tools.discoveryCommand. The command must take the function name (exactly as in the function declaration) as its first command-line argument, read function arguments as JSON on stdin (analogous to functionCall.args), and return function output as JSON on stdout (analogous to functionResponse.response.content). | undefined | |
| tools.useRipgrep | boolean | Use ripgrep for file content search instead of the fallback implementation. Provides faster search performance. | true | |
| tools.useBuiltinRipgrep | boolean | Use the bundled ripgrep binary. When set to false, the system-level rg command is used instead. Only effective when tools.useRipgrep is true. | true | |
| tools.enableToolOutputTruncation | boolean | Enable truncation of large tool outputs. | true | Requires restart: Yes |
| tools.truncateToolOutputThreshold | number | Truncate tool output if it is larger than this many characters. Applies to the Shell, Grep, Glob, ReadFile, and ReadManyFiles tools. | 25000 | Requires restart: Yes |
| tools.truncateToolOutputLines | number | Maximum lines or entries kept when truncating tool output. Applies to the Shell, Grep, Glob, ReadFile, and ReadManyFiles tools. | 1000 | Requires restart: Yes |
| tools.autoAccept | boolean | Controls whether the CLI automatically accepts and executes tool calls that are considered safe (e.g., read-only operations) without explicit user confirmation. If set to true, the CLI bypasses the confirmation prompt for tools deemed safe. | false | |
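As a sketch of how tools.core and tools.allowed can work together, the fragment below restricts shell execution to an allowlist and skips confirmation for one trusted command (the specific commands shown are illustrative):

```json
{
  "tools": {
    "core": ["run_shell_command(ls -l)", "run_shell_command(git status)"],
    "allowed": ["run_shell_command(git status)"]
  }
}
```

With this configuration, only the listed shell commands can run at all, and git status runs without a confirmation prompt.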

mcp

| Setting | Type | Description | Default |
|---|---|---|---|
| mcp.serverCommand | string | Command to start an MCP server. | undefined |
| mcp.allowed | array of strings | An allowlist of MCP server names that should be made available to the model. This can be used to restrict the set of MCP servers to connect to. Note that this is ignored if --allowed-mcp-server-names is set. | undefined |
| mcp.excluded | array of strings | A denylist of MCP servers to exclude. A server listed in both mcp.excluded and mcp.allowed is excluded. Note that this is ignored if --allowed-mcp-server-names is set. | undefined |
Note

Security Note for MCP servers: These settings use simple string matching on MCP server names, which can be modified. If you’re a system administrator looking to prevent users from bypassing this, consider configuring the mcpServers at the system settings level such that the user will not be able to configure any MCP servers of their own. This should not be used as an airtight security mechanism.

security

| Setting | Type | Description | Default |
|---|---|---|---|
| security.folderTrust.enabled | boolean | Setting to track whether folder trust is enabled. | false |
| security.auth.selectedType | string | The currently selected authentication type. | undefined |
| security.auth.enforcedType | string | The required auth type (useful for enterprises). | undefined |
| security.auth.useExternal | boolean | Whether to use an external authentication flow. | undefined |

advanced

| Setting | Type | Description | Default |
|---|---|---|---|
| advanced.autoConfigureMemory | boolean | Automatically configure Node.js memory limits. | false |
| advanced.dnsResolutionOrder | string | The DNS resolution order. | undefined |
| advanced.excludedEnvVars | array of strings | Environment variables that should be excluded from being loaded from project .env files. This prevents project-specific environment variables (like DEBUG=true) from interfering with the CLI behavior. Variables from .qwen/.env files are never excluded. | ["DEBUG","DEBUG_MODE"] |
| advanced.bugCommand | object | Configuration for the bug report command; overrides the default URL for the /bug command. Properties: urlTemplate (string), a URL that can contain {title} and {info} placeholders. Example: "bugCommand": { "urlTemplate": "https://bug.example.com/new?title={title}&info={info}" } | undefined |
| advanced.tavilyApiKey | string | API key for the Tavily web search service. Used to enable the web_search tool functionality. | undefined |
Note

Note about advanced.tavilyApiKey: This is a legacy configuration format. For Qwen OAuth users, DashScope provider is automatically available without any configuration. For other authentication types, configure Tavily or Google providers using the new webSearch configuration format.

mcpServers

Configures connections to one or more Model Context Protocol (MCP) servers for discovering and using custom tools. Qwen Code attempts to connect to each configured MCP server to discover available tools. If multiple MCP servers expose a tool with the same name, the tool names are prefixed with the server alias you defined in the configuration (e.g., serverAlias__actualToolName) to avoid conflicts. Note that the system might strip certain schema properties from MCP tool definitions for compatibility. At least one of command, url, or httpUrl must be provided. If multiple are specified, the order of precedence is httpUrl, then url, then command.

| Property | Type | Description | Optional |
|---|---|---|---|
| `mcpServers.<SERVER_NAME>.command` | string | The command to execute to start the MCP server via standard I/O. | Yes |
| `mcpServers.<SERVER_NAME>.args` | array of strings | Arguments to pass to the command. | Yes |
| `mcpServers.<SERVER_NAME>.env` | object | Environment variables to set for the server process. | Yes |
| `mcpServers.<SERVER_NAME>.cwd` | string | The working directory in which to start the server. | Yes |
| `mcpServers.<SERVER_NAME>.url` | string | The URL of an MCP server that uses Server-Sent Events (SSE) for communication. | Yes |
| `mcpServers.<SERVER_NAME>.httpUrl` | string | The URL of an MCP server that uses streamable HTTP for communication. | Yes |
| `mcpServers.<SERVER_NAME>.headers` | object | A map of HTTP headers to send with requests to url or httpUrl. | Yes |
| `mcpServers.<SERVER_NAME>.timeout` | number | Timeout in milliseconds for requests to this MCP server. | Yes |
| `mcpServers.<SERVER_NAME>.trust` | boolean | Trust this server and bypass all tool call confirmations. | Yes |
| `mcpServers.<SERVER_NAME>.description` | string | A brief description of the server, which may be used for display purposes. | Yes |
| `mcpServers.<SERVER_NAME>.includeTools` | array of strings | List of tool names to include from this MCP server. When specified, only the tools listed here will be available from this server (allowlist behavior). If not specified, all tools from the server are enabled by default. | Yes |
| `mcpServers.<SERVER_NAME>.excludeTools` | array of strings | List of tool names to exclude from this MCP server. Tools listed here will not be available to the model, even if they are exposed by the server. Note: excludeTools takes precedence over includeTools; if a tool is in both lists, it is excluded. | Yes |
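Putting these properties together, a hypothetical server entry using streamable HTTP with a header, a timeout, and a tool allowlist might look like this (the server name, URL, token variable, and tool names are all invented for illustration):

```json
{
  "mcpServers": {
    "myHttpServer": {
      "httpUrl": "http://localhost:8080/mcp",
      "headers": {
        "Authorization": "Bearer $MY_MCP_TOKEN"
      },
      "timeout": 30000,
      "includeTools": ["safe_tool", "file_reader"]
    }
  }
}
```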

telemetry

Configures logging and metrics collection for Qwen Code. For more information, see telemetry.

| Setting | Type | Description |
|---|---|---|
| telemetry.enabled | boolean | Whether or not telemetry is enabled. |
| telemetry.target | string | The destination for collected telemetry. Supported values are local and gcp. |
| telemetry.otlpEndpoint | string | The endpoint for the OTLP Exporter. |
| telemetry.otlpProtocol | string | The protocol for the OTLP Exporter (grpc or http). |
| telemetry.logPrompts | boolean | Whether or not to include the content of user prompts in the logs. |
| telemetry.outfile | string | The file to write telemetry to when target is local. |
| telemetry.useCollector | boolean | Whether to use an external OTLP collector. |

Example settings.json

Here is an example of a settings.json file with the nested structure, new as of v0.3.0:

```json
{
  "general": {
    "vimMode": true,
    "preferredEditor": "code"
  },
  "ui": {
    "theme": "GitHub",
    "hideBanner": true,
    "hideTips": false,
    "customWittyPhrases": [
      "You forget a thousand things every day. Make sure this is one of 'em",
      "Connecting to AGI"
    ]
  },
  "tools": {
    "approvalMode": "yolo",
    "sandbox": "docker",
    "discoveryCommand": "bin/get_tools",
    "callCommand": "bin/call_tool",
    "exclude": ["write_file"]
  },
  "mcpServers": {
    "mainServer": {
      "command": "bin/mcp_server.py"
    },
    "anotherServer": {
      "command": "node",
      "args": ["mcp_server.js", "--verbose"]
    }
  },
  "telemetry": {
    "enabled": true,
    "target": "local",
    "otlpEndpoint": "http://localhost:4317",
    "logPrompts": true
  },
  "privacy": {
    "usageStatisticsEnabled": true
  },
  "model": {
    "name": "qwen3-coder-plus",
    "maxSessionTurns": 10,
    "enableOpenAILogging": false,
    "openAILoggingDir": "~/qwen-logs",
    "summarizeToolOutput": {
      "run_shell_command": {
        "tokenBudget": 100
      }
    }
  },
  "context": {
    "fileName": ["CONTEXT.md", "QWEN.md"],
    "includeDirectories": ["path/to/dir1", "~/path/to/dir2", "../path/to/dir3"],
    "loadFromIncludeDirectories": true,
    "fileFiltering": {
      "respectGitIgnore": false
    }
  },
  "advanced": {
    "excludedEnvVars": ["DEBUG", "DEBUG_MODE", "NODE_ENV"]
  }
}
```

Shell History

The CLI keeps a history of shell commands you run. To avoid conflicts between different projects, this history is stored in a project-specific directory within your user’s home folder.

  • Location: ~/.qwen/tmp/<project_hash>/shell_history
    • <project_hash> is a unique identifier generated from your project’s root path.
    • The history is stored in a file named shell_history.

Environment Variables & .env Files

Environment variables are a common way to configure applications, especially for sensitive information (like tokens) or for settings that might change between environments.

Qwen Code can automatically load environment variables from .env files. For authentication-related variables (like OPENAI_*) and the recommended .qwen/.env approach, see Authentication.

Tip

Environment Variable Exclusion: Some environment variables (like DEBUG and DEBUG_MODE) are automatically excluded from project .env files by default to prevent interference with the CLI behavior. Variables from .qwen/.env files are never excluded. You can customize this behavior using the advanced.excludedEnvVars setting in your settings.json file.

Environment Variables Table

| Variable | Description | Notes |
|---|---|---|
| GEMINI_TELEMETRY_ENABLED | Set to true or 1 to enable telemetry. Any other value is treated as disabling it. | Overrides the telemetry.enabled setting. |
| GEMINI_TELEMETRY_TARGET | Sets the telemetry target (local or gcp). | Overrides the telemetry.target setting. |
| GEMINI_TELEMETRY_OTLP_ENDPOINT | Sets the OTLP endpoint for telemetry. | Overrides the telemetry.otlpEndpoint setting. |
| GEMINI_TELEMETRY_OTLP_PROTOCOL | Sets the OTLP protocol (grpc or http). | Overrides the telemetry.otlpProtocol setting. |
| GEMINI_TELEMETRY_LOG_PROMPTS | Set to true or 1 to enable logging of user prompts. Any other value is treated as disabling it. | Overrides the telemetry.logPrompts setting. |
| GEMINI_TELEMETRY_OUTFILE | Sets the file path to write telemetry to when the target is local. | Overrides the telemetry.outfile setting. |
| GEMINI_TELEMETRY_USE_COLLECTOR | Set to true or 1 to enable use of an external OTLP collector. Any other value is treated as disabling it. | Overrides the telemetry.useCollector setting. |
| GEMINI_SANDBOX | Alternative to the sandbox setting in settings.json. | Accepts true, false, docker, podman, or a custom command string. |
| SEATBELT_PROFILE | (macOS specific) Switches the Seatbelt (sandbox-exec) profile on macOS. | permissive-open (default): restricts writes to the project folder (and a few other folders; see packages/cli/src/utils/sandbox-macos-permissive-open.sb) but allows other operations.<br>strict: uses a strict profile that declines operations by default.<br>`<profile_name>`: uses a custom profile. To define a custom profile, create a file named `sandbox-macos-<profile_name>.sb` in your project's .qwen/ directory (e.g., my-project/.qwen/sandbox-macos-custom.sb). |
| DEBUG or DEBUG_MODE | (Often used by underlying libraries or the CLI itself.) Set to true or 1 to enable verbose debug logging, which can be helpful for troubleshooting. | These variables are automatically excluded from project .env files by default to prevent interference with the CLI behavior. Use .qwen/.env files if you need to set them for Qwen Code specifically. |
| NO_COLOR | Set to any value to disable all color output in the CLI. | |
| CLI_TITLE | Set to a string to customize the title of the CLI. | |
| CODE_ASSIST_ENDPOINT | Specifies the endpoint for the code assist server. | Useful for development and testing. |
| TAVILY_API_KEY | Your API key for the Tavily web search service. | Used to enable the web_search tool functionality. Example: export TAVILY_API_KEY="tvly-your-api-key-here" |

Command-Line Arguments

Arguments passed directly when running the CLI can override other configurations for that specific session.

Command-Line Arguments Table

| Argument | Alias | Description | Possible Values | Notes |
|---|---|---|---|---|
| --model | -m | Specifies the Qwen model to use for this session. | Model name | Example: npm start -- --model qwen3-coder-plus |
| --prompt | -p | Passes a prompt directly to the command. This invokes Qwen Code in non-interactive mode. | Your prompt text | For scripting, use the --output-format json flag to get structured output. |
| --prompt-interactive | -i | Starts an interactive session with the provided prompt as the initial input. | Your prompt text | The prompt is processed within the interactive session, not before it. Cannot be used when piping input from stdin. Example: qwen -i "explain this code" |
| --output-format | -o | Specifies the format of the CLI output for non-interactive mode. | text, json, stream-json | text (default): the standard human-readable output. json: a machine-readable JSON output emitted at the end of execution. stream-json: streaming JSON messages emitted as they occur during execution. For structured output and scripting, use --output-format json or --output-format stream-json. See Headless Mode for detailed information. |
| --input-format | | Specifies the format consumed from standard input. | text, stream-json | text (default): standard text input from stdin or command-line arguments. stream-json: JSON message protocol via stdin for bidirectional communication. Requirement: --input-format stream-json requires --output-format stream-json to be set. When using stream-json, stdin is reserved for protocol messages. See Headless Mode for detailed information. |
| --include-partial-messages | | Includes partial assistant messages when using the stream-json output format. When enabled, emits stream events (message_start, content_block_delta, etc.) as they occur during streaming. | | Default: false. Requires --output-format stream-json to be set. See Headless Mode for detailed information about stream events. |
| --sandbox | -s | Enables sandbox mode for this session. | | |
| --sandbox-image | | Sets the sandbox image URI. | | |
| --debug | -d | Enables debug mode for this session, providing more verbose output. | | |
| --all-files | -a | If set, recursively includes all files within the current directory as context for the prompt. | | |
| --help | -h | Displays help information about command-line arguments. | | |
| --show-memory-usage | | Displays the current memory usage. | | |
| --yolo | | Enables YOLO mode, which automatically approves all tool calls. | | |
| --approval-mode | | Sets the approval mode for tool calls. | plan, default, auto-edit, yolo | Supported modes: plan (analyze only; do not modify files or execute commands), default (require approval for file edits or shell commands; the default behavior), auto-edit (automatically approve edit tools such as edit and write_file while prompting for others), yolo (automatically approve all tool calls; equivalent to --yolo). Cannot be used together with --yolo; use --approval-mode=yolo instead of --yolo for the new unified approach. Example: qwen --approval-mode auto-edit. See more about Approval Mode. |
| --allowed-tools | | A comma-separated list of tool names that will bypass the confirmation dialog. | Tool names | Example: qwen --allowed-tools "Shell(git status)" |
| --telemetry | | Enables telemetry. | | |
| --telemetry-target | | Sets the telemetry target. | | See telemetry for more information. |
| --telemetry-otlp-endpoint | | Sets the OTLP endpoint for telemetry. | | See telemetry for more information. |
| --telemetry-otlp-protocol | | Sets the OTLP protocol for telemetry (grpc or http). | | Defaults to grpc. See telemetry for more information. |
| --telemetry-log-prompts | | Enables logging of prompts for telemetry. | | See telemetry for more information. |
| --checkpointing | | Enables checkpointing. | | |
| --extensions | -e | Specifies a list of extensions to use for the session. | Extension names | If not provided, all available extensions are used. Use qwen -e none to disable all extensions. Example: qwen -e my-extension -e my-other-extension |
| --list-extensions | -l | Lists all available extensions and exits. | | |
| --proxy | | Sets the proxy for the CLI. | Proxy URL | Example: --proxy http://localhost:7890 |
| --include-directories | | Includes additional directories in the workspace for multi-directory support. | Directory paths | Can be specified multiple times or as comma-separated values. A maximum of 5 directories can be added. Example: --include-directories /path/to/project1,/path/to/project2 or --include-directories /path/to/project1 --include-directories /path/to/project2 |
| --screen-reader | | Enables screen reader mode, which adjusts the TUI for better compatibility with screen readers. | | |
| --version | | Displays the version of the CLI. | | |
| --openai-logging | | Enables logging of OpenAI API calls for debugging and analysis. | | This flag overrides the enableOpenAILogging setting in settings.json. |
| --openai-logging-dir | | Sets a custom directory path for OpenAI API logs. | Directory path | This flag overrides the openAILoggingDir setting in settings.json. Supports absolute paths, relative paths, and ~ expansion. Example: qwen --openai-logging-dir "~/qwen-logs" --openai-logging |
| --tavily-api-key | | Sets the Tavily API key for web search functionality for this session. | API key | Example: qwen --tavily-api-key tvly-your-api-key-here |

Context Files (Hierarchical Instructional Context)

While not strictly configuration for the CLI’s behavior, context files (defaulting to QWEN.md but configurable via the context.fileName setting) are crucial for configuring the instructional context (also referred to as “memory”). This powerful feature allows you to give project-specific instructions, coding style guides, or any relevant background information to the AI, making its responses more tailored and accurate to your needs. The CLI includes UI elements, such as an indicator in the footer showing the number of loaded context files, to keep you informed about the active context.

  • Purpose: These Markdown files contain instructions, guidelines, or context that you want the Qwen model to be aware of during your interactions. The system is designed to manage this instructional context hierarchically.

Example Context File Content (e.g. QWEN.md)

Here’s a conceptual example of what a context file at the root of a TypeScript project might contain:

```markdown
# Project: My Awesome TypeScript Library

## General Instructions:

- When generating new TypeScript code, please follow the existing coding style.
- Ensure all new functions and classes have JSDoc comments.
- Prefer functional programming paradigms where appropriate.
- All code should be compatible with TypeScript 5.0 and Node.js 20+.

## Coding Style:

- Use 2 spaces for indentation.
- Interface names should be prefixed with `I` (e.g., `IUserService`).
- Private class members should be prefixed with an underscore (`_`).
- Always use strict equality (`===` and `!==`).

## Specific Component: `src/api/client.ts`

- This file handles all outbound API requests.
- When adding new API call functions, ensure they include robust error handling and logging.
- Use the existing `fetchWithRetry` utility for all GET requests.

## Regarding Dependencies:

- Avoid introducing new external dependencies unless absolutely necessary.
- If a new dependency is required, please state the reason.
```

This example demonstrates how you can provide general project context, specific coding conventions, and even notes about particular files or components. The more relevant and precise your context files are, the better the AI can assist you. Project-specific context files are highly encouraged to establish conventions and context.

  • Hierarchical Loading and Precedence: The CLI implements a sophisticated hierarchical memory system by loading context files (e.g., QWEN.md) from several locations. Content from files lower in this list (more specific) typically overrides or supplements content from files higher up (more general). The exact concatenation order and final context can be inspected using the /memory show command. The typical loading order is:
    1. Global Context File:
      • Location: ~/.qwen/<configured-context-filename> (e.g., ~/.qwen/QWEN.md in your user home directory).
      • Scope: Provides default instructions for all your projects.
    2. Project Root & Ancestors Context Files:
      • Location: The CLI searches for the configured context file in the current working directory and then in each parent directory up to either the project root (identified by a .git folder) or your home directory.
      • Scope: Provides context relevant to the entire project or a significant portion of it.
    3. Sub-directory Context Files (Contextual/Local):
      • Location: The CLI also scans for the configured context file in subdirectories below the current working directory (respecting common ignore patterns like node_modules, .git, etc.). The breadth of this search is limited to 200 directories by default, but can be configured with the context.discoveryMaxDirs setting in your settings.json file.
      • Scope: Allows for highly specific instructions relevant to a particular component, module, or subsection of your project.
  • Concatenation & UI Indication: The contents of all found context files are concatenated (with separators indicating their origin and path) and provided as part of the system prompt. The CLI footer displays the count of loaded context files, giving you a quick visual cue about the active instructional context.
  • Importing Content: You can modularize your context files by importing other Markdown files using the @path/to/file.md syntax. For more details, see the Memory Import Processor documentation.
  • Commands for Memory Management:
    • Use /memory refresh to force a re-scan and reload of all context files from all configured locations. This updates the AI’s instructional context.
    • Use /memory show to display the combined instructional context currently loaded, allowing you to verify the hierarchy and content being used by the AI.
    • See the Commands documentation for full details on the /memory command and its sub-commands (show and refresh).
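The context-related settings mentioned above live under the `context` category in `settings.json`. A minimal sketch, assuming only the `context.fileName` and `context.discoveryMaxDirs` keys described in this section (the values shown are the documented defaults):

```json
{
  "context": {
    "fileName": "QWEN.md",
    "discoveryMaxDirs": 200
  }
}
```

Raising `discoveryMaxDirs` widens the sub-directory scan at the cost of slower context discovery in large repositories.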

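As a concrete illustration of the import syntax, a top-level context file might pull shared guidelines in from separate Markdown files (the file names below are hypothetical):

```markdown
# Project: My Awesome TypeScript Library

@./docs/coding-style.md
@./docs/api-conventions.md
```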
By understanding and utilizing these configuration layers and the hierarchical nature of context files, you can effectively manage the AI’s memory and tailor Qwen Code’s responses to your specific needs and projects.

Sandbox

Qwen Code can execute potentially unsafe operations (like shell commands and file modifications) within a sandboxed environment to protect your system.

Sandboxing is disabled by default, but you can enable it in a few ways:

  • Using the --sandbox (or -s) flag.
  • Setting the GEMINI_SANDBOX environment variable.
  • Running with --yolo or --approval-mode=yolo, which enables the sandbox by default.
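For example, either of the following starts a sandboxed session (assuming `qwen` is on your PATH, and that GEMINI_SANDBOX accepts a boolean-style value):

```bash
# Enable sandboxing for a single session via the flag
qwen --sandbox

# Or enable it via the environment variable
GEMINI_SANDBOX=true qwen
```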

By default, sandboxing uses a pre-built qwen-code-sandbox Docker image.

For project-specific sandboxing needs, you can create a custom Dockerfile at .qwen/sandbox.Dockerfile in your project’s root directory. This Dockerfile can be based on the base sandbox image:

```dockerfile
FROM qwen-code-sandbox

# Add your custom dependencies or configurations here
# For example:
# RUN apt-get update && apt-get install -y some-package
# COPY ./my-config /app/my-config
```

When .qwen/sandbox.Dockerfile exists, you can set the BUILD_SANDBOX environment variable when running Qwen Code to automatically build the custom sandbox image:

```bash
BUILD_SANDBOX=1 qwen -s
```

Usage Statistics

To help us improve Qwen Code, we collect anonymized usage statistics. This data helps us understand how the CLI is used, identify common issues, and prioritize new features.

What we collect:

  • Tool Calls: We log the names of the tools that are called, whether they succeed or fail, and how long they take to execute. We do not collect the arguments passed to the tools or any data returned by them.
  • API Requests: We log the model used for each request, the duration of the request, and whether it was successful. We do not collect the content of the prompts or responses.
  • Session Information: We collect information about the configuration of the CLI, such as the enabled tools and the approval mode.

What we DON’T collect:

  • Personally Identifiable Information (PII): We do not collect any personal information, such as your name, email address, or API keys.
  • Prompt and Response Content: We do not log the content of your prompts or the responses from the model.
  • File Content: We do not log the content of any files that are read or written by the CLI.

How to opt out:

You can opt out of usage statistics collection at any time by setting the usageStatisticsEnabled property to false under the privacy category in your settings.json file:

```json
{
  "privacy": {
    "usageStatisticsEnabled": false
  }
}
```
Note

When usage statistics are enabled, events are sent to an Alibaba Cloud RUM collection endpoint.
