* refactor: replace hardcoded DNS config filename with constant reference
* refactor: remove redundant import of constants in IClashTemp template method
* refactor: add conditional compilation for DEFAULT_REDIR based on OS
* refactor: simplify default TPROXY port handling and remove unused trace_err macro
* refactor: simplify default TPROXY port fallback logic
- Replaced direct log calls with a new logging macro that includes a logging type for better categorization.
- Updated logging in various modules including `merge.rs`, `mod.rs`, `tun.rs`, `clash.rs`, `profile.rs`, `proxy.rs`, `window.rs`, `lightweight.rs`, `guard.rs`, `autostart.rs`, `dirs.rs`, `dns.rs`, `scheme.rs`, `server.rs`, and `window_manager.rs`.
- Introduced logging types such as `Core`, `Network`, `ProxyMode`, `Window`, `Lightweight`, `Service`, and `File` to enhance log clarity and filtering.
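A rough sketch of what such a categorized logging macro can look like; the macro shape, the `Type` variants shown, and the use of the `log`/`env_logger` crates are assumptions for illustration, not the project's actual definitions:

```rust
// Illustrative sketch of a logging macro that carries a logging type,
// so output can be filtered by subsystem as well as by level.
use std::fmt;

#[derive(Debug, Clone, Copy)]
enum Type {
    Core,
    Network,
    Window,
}

impl fmt::Display for Type {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{self:?}")
    }
}

// Prefix every message with its logging type for categorization.
macro_rules! logging {
    ($level:ident, $ty:expr, $($arg:tt)*) => {
        log::$level!("[{}] {}", $ty, format_args!($($arg)*));
    };
}

fn main() {
    env_logger::init();
    logging!(info, Type::Core, "core started on port {}", 7890);
    logging!(warn, Type::Network, "request timed out");
}
```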
* refactor: optimize item handling and improve profile management
* refactor: update IVerge usage to pass references instead of owned values
* refactor: update patch_verge to use data_ref for improved data handling
* refactor: move handle_copy function to improve resource initialization logic
* refactor: update profile handling to use references for improved memory efficiency
* refactor: simplify get_item method and update profile item retrieval to use string slices
* refactor: update profile validation and patching to use references for improved performance
* refactor: update profile functions to use references for improved performance and memory efficiency
* refactor: update profile patching functions to use references for improved memory efficiency
* refactor: simplify merge function in PrfOption to enhance readability
* refactor: update change_core function to accept a reference for improved memory efficiency
* refactor: update PrfItem and profile functions to use references for improved memory efficiency
* refactor: update resolve_scheme function to accept a reference for improved memory efficiency
* refactor: update resolve_scheme function to accept a string slice for improved flexibility
* refactor: simplify update_profile parameters and logic
* refactor: proxy refresh
* fix(proxy-store): properly hydrate and filter backend provider snapshots
* fix(proxy-store): add monotonic fetch guard and event bridge cleanup
* fix(proxy-store): tweak fetch sequencing guard to prevent snapshot invalidation from wiping fast responses
* docs: UPDATELOG.md
* fix(proxy-snapshot, proxy-groups): restore last-selected proxy and group info
* fix(proxy): merge static and provider entries in snapshot; fix Virtuoso viewport height
* fix(proxy-groups): restrict reduced-height viewport to chain-mode column
* refactor(profiles): introduce a state machine
* refactor: replace state machine with reducer
* refactor: introduce a profile switch worker
* refactor: hook up a backend-driven profile switch flow
* refactor(profile-switch): serialize switches with async queue and enrich frontend events
* feat(profiles): centralize profile switching with reducer/driver queue to fix stuck UI on rapid toggles
* chore: translate comments and log messages to English to avoid encoding issues
* refactor: migrate backend queue to SwitchDriver actor
* fix(profile): unify error string types in validation helper
* refactor(profile): make switch driver fully async and handle panics safely
* refactor(cmd): move switch-validation helper into new profile_switch module
* refactor(profile): modularize switch logic into profile_switch.rs
* refactor(profile_switch): modularize switch handler
- Break monolithic switch handler into proper module hierarchy
- Move shared globals, constants, and SwitchScope guard to state.rs
- Isolate queue orchestration and async task spawning in driver.rs
- Consolidate switch pipeline and config patching in workflow.rs
- Extract request pre-checks/YAML validation into validation.rs
* refactor(profile_switch): centralize state management and add cancellation flow
- Introduced SwitchManager in state.rs to unify mutex, sequencing, and SwitchScope handling.
- Added SwitchCancellation and SwitchRequest wrappers to encapsulate cancel tokens and notifications.
- Updated driver to allocate task IDs via SwitchManager, cancel old tokens, and queue next jobs in order.
- Updated workflow to check cancellation and sequence at each phase, replacing global flags with manager APIs.
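A minimal sketch of the cancellation wrappers described above, assuming tokio_util's `CancellationToken`; the struct fields and method names are illustrative:

```rust
// Allocate monotonically increasing task IDs and cancel the in-flight
// switch whenever a newer request arrives.
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Mutex;
use tokio_util::sync::CancellationToken;

struct SwitchCancellation {
    task_id: u64,
    token: CancellationToken,
}

#[derive(Default)]
struct SwitchManager {
    next_id: AtomicU64,
    current: Mutex<Option<SwitchCancellation>>,
}

impl SwitchManager {
    /// Allocate a new task id and cancel whatever switch was in flight,
    /// so only the newest request keeps running.
    fn begin_switch(&self) -> SwitchCancellation {
        let task_id = self.next_id.fetch_add(1, Ordering::SeqCst);
        let token = CancellationToken::new();
        let mut current = self.current.lock().unwrap();
        if let Some(old) = current.replace(SwitchCancellation {
            task_id,
            token: token.clone(),
        }) {
            old.token.cancel(); // preempt the stale switch
        }
        SwitchCancellation { task_id, token }
    }
}
```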
* feat(profile_switch): integrate explicit state machine for profile switching
- workflow.rs:24 now delegates each switch to SwitchStateMachine, passing an owned SwitchRequest.
Queue cancellation and state-sequence checks are centralized inside the machine instead of scattered guards.
- workflow.rs:176 replaces the old helper with `SwitchStateMachine::new(manager(), None, profiles).run().await`,
ensuring manual profile patches follow the same workflow (locking, validation, rollback) as queued switches.
- workflow.rs:180 & 275 expose `validate_profile_yaml` and `restore_previous_profile` for reuse inside the state machine.
- workflow/state_machine.rs:1 introduces a dedicated state machine module.
It manages global mutex acquisition, request/cancellation state, YAML validation, draft patching,
`CoreManager::update_config`, failure rollback, and tray/notification side-effects.
Transitions check for cancellations and stale sequences; completions release guards via `SwitchScope` drop.
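As a rough illustration of the state-machine shape (the states and transition order here are assumptions based on the commit text):

```rust
// Each transition first checks for cancellation, replacing the
// scattered guard flags mentioned above.
use tokio_util::sync::CancellationToken;

enum SwitchState {
    Validating,
    Patching,
    Applying,
    Done { success: bool },
}

struct SwitchStateMachine {
    state: SwitchState,
    cancel: CancellationToken,
}

impl SwitchStateMachine {
    async fn run(mut self) -> bool {
        loop {
            if self.cancel.is_cancelled() {
                return false; // superseded or stale: bail out cleanly
            }
            self.state = match self.state {
                SwitchState::Validating => SwitchState::Patching,
                SwitchState::Patching => SwitchState::Applying,
                SwitchState::Applying => SwitchState::Done { success: true },
                SwitchState::Done { success } => return success,
            };
        }
    }
}
```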
* refactor(profile-switch): integrate stage-aware panic handling
- src-tauri/src/cmd/profile_switch/workflow/state_machine.rs:1
Defines SwitchStage and SwitchPanicInfo as crate-visible, wraps each transition in with_stage(...) with catch_unwind, and propagates CmdResult<bool> to distinguish validation failures from panics while keeping cancellation semantics.
- src-tauri/src/cmd/profile_switch/workflow.rs:25
Updates run_switch_job to return Result<bool, SwitchPanicInfo>, routing timeout, validation, config, and stage panic cases separately. Reuses SwitchPanicInfo for logging/UI notifications; patch_profiles_config maps state-machine panics into user-facing error strings.
- src-tauri/src/cmd/profile_switch/driver.rs:1
Adds SwitchJobOutcome to unify workflow results: normal completions carry bool, and panics propagate SwitchPanicInfo. The driver loop now logs panics explicitly and uses AssertUnwindSafe(...).catch_unwind() to guard setup-phase panics.
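A hedged sketch of the stage-aware panic capture, assuming the `futures` crate's `catch_unwind` combinator; `SwitchStage` and `SwitchPanicInfo` mirror the names above, but their bodies are guesses:

```rust
// Convert a panic inside one stage into structured info instead of
// tearing down the driver task.
use futures::FutureExt;
use std::panic::AssertUnwindSafe;

#[derive(Debug, Clone, Copy)]
enum SwitchStage {
    Validate,
    Apply,
}

#[derive(Debug)]
struct SwitchPanicInfo {
    stage: SwitchStage,
    message: String,
}

async fn with_stage<F>(stage: SwitchStage, fut: F) -> Result<bool, SwitchPanicInfo>
where
    F: std::future::Future<Output = bool>,
{
    match AssertUnwindSafe(fut).catch_unwind().await {
        Ok(ok) => Ok(ok),
        Err(payload) => {
            let message = payload
                .downcast_ref::<&str>()
                .map(|s| s.to_string())
                .unwrap_or_else(|| "unknown panic".into());
            Err(SwitchPanicInfo { stage, message })
        }
    }
}
```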
* refactor(profile-switch): add watchdog, heartbeat, and async timeout guards
- Introduce SwitchHeartbeat for stage tracking and timing; log stage transitions with elapsed durations.
- Add watchdog in driver to cancel stalled switches (5s heartbeat timeout).
- Wrap blocking ops (Config::apply, tray updates, profiles_save_file_safe, etc.) with time::timeout to prevent async stalls.
- Improve logs for stage transitions and watchdog timeouts to clarify cancellation points.
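A minimal sketch of the heartbeat/watchdog pair with the 5s budget noted above; the types and the polling interval are illustrative:

```rust
// The workflow beats at every stage transition; the watchdog cancels
// a switch whose heartbeat has not advanced within the budget.
use std::sync::{Arc, Mutex};
use std::time::{Duration, Instant};
use tokio_util::sync::CancellationToken;

#[derive(Clone)]
struct SwitchHeartbeat {
    last_beat: Arc<Mutex<Instant>>,
}

impl SwitchHeartbeat {
    fn new() -> Self {
        Self { last_beat: Arc::new(Mutex::new(Instant::now())) }
    }
    /// Called by the workflow at every stage transition.
    fn beat(&self) {
        *self.last_beat.lock().unwrap() = Instant::now();
    }
    fn elapsed(&self) -> Duration {
        self.last_beat.lock().unwrap().elapsed()
    }
}

async fn watchdog(heartbeat: SwitchHeartbeat, cancel: CancellationToken) {
    let mut tick = tokio::time::interval(Duration::from_millis(500));
    loop {
        tick.tick().await;
        if heartbeat.elapsed() > Duration::from_secs(5) {
            cancel.cancel(); // stalled switch: unblock the queue
            return;
        }
    }
}
```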
* refactor(profile-switch): async post-switch tasks, early lock release, and spawn_blocking for IO
* feat(profile-switch): track cleanup and coordinate pipeline
- Add explicit cleanup tracking in the driver (`cleanup_profiles` map + `CleanupDone` messages) to know when background post-switch work is still running before starting a new workflow. (driver.rs:29-50)
- Update `handle_enqueue` to detect “cleanup in progress”: same-profile retries are short-circuited; other requests collapse the pending queue, cancelling old tokens so only the latest intent survives. (driver.rs:176-247)
- Rework scheduling helpers: `start_next_job` refuses to start while cleanup is outstanding; discarded requests release cancellation tokens; cleanup completion explicitly restarts the pipeline. (driver.rs:258-442)
* feat(profile-switch): unify post-switch cleanup handling
- workflow.rs (25-427) returns `SwitchWorkflowResult` (success + CleanupHandle) or `SwitchWorkflowError`.
All failure/timeout paths stash post-switch work into a single CleanupHandle.
Cleanup helpers (`notify_profile_switch_finished` and `close_connections_after_switch`) run inside that task for proper lifetime handling.
- driver.rs (29-439) propagates CleanupHandle through `SwitchJobOutcome`, spawns a bridge to wait for completion, and blocks `start_next_job` until done.
Direct driver-side panics now schedule failure cleanup via the shared helper.
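A sketch of the `CleanupHandle` idea; the names and signatures here are guesses from the commit text:

```rust
// Post-switch work runs in its own task, and the driver awaits it
// before scheduling the next job.
use tokio::task::JoinHandle;

struct CleanupHandle(JoinHandle<()>);

impl CleanupHandle {
    fn spawn<F>(work: F) -> Self
    where
        F: std::future::Future<Output = ()> + Send + 'static,
    {
        Self(tokio::spawn(work))
    }

    /// Block the pipeline until background cleanup has finished.
    async fn wait(self) {
        let _ = self.0.await; // join errors are logged upstream
    }
}

async fn start_next_job(cleanup: Option<CleanupHandle>) {
    if let Some(handle) = cleanup {
        handle.wait().await; // refuse to start while cleanup is outstanding
    }
    // ...dequeue and run the next switch here...
}
```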
* tmp
* Revert "tmp"
This reverts commit e582cf4a65.
* refactor: queue frontend events through async dispatcher
* refactor: queue frontend switch/proxy events and throttle notices
* chore: frontend debug log
* fix: re-enable only ProfileSwitchFinished events - keep others suppressed for crash isolation
- Re-enabled only ProfileSwitchFinished events; RefreshClash, RefreshProxy, and ProfileChanged remain suppressed (they log suppression messages)
- Allows frontend to receive task completion notifications for UI feedback while crash isolation continues
- src-tauri/src/core/handle.rs now only suppresses notify_profile_changed
- Serialized emitter, frontend logging bridge, and other diagnostics unchanged
* refactor: refreshClashData
* refactor(proxy): stabilize proxy switch pipeline and rendering
- Add coalescing buffer in notification.rs to emit only the latest proxies-updated snapshot
- Replace nextTick with queueMicrotask in asyncQueue.ts for same-frame hydration
- Hide auto-generated GLOBAL snapshot and preserve optional metadata in proxy-snapshot.ts
- Introduce stable proxy rendering state in AppDataProvider (proxyTargetProfileId, proxyDisplayProfileId, isProxyRefreshPending)
- Update proxy page to fade content during refresh and overlay status banner instead of showing incomplete snapshot
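One way to realize such a coalescing buffer is a `tokio::sync::watch` channel, where intermediate snapshots overwrite each other; this sketch (with an assumed payload type) shows the shape:

```rust
// Rapid producer updates coalesce in the channel; the consumer wakes
// once per batch and only ever emits the latest snapshot.
use tokio::sync::watch;

#[derive(Clone, Debug, Default)]
struct ProxiesSnapshot {
    revision: u64,
}

#[tokio::main]
async fn main() {
    let (tx, mut rx) = watch::channel(ProxiesSnapshot::default());

    tokio::spawn(async move {
        for revision in 1..=100 {
            let _ = tx.send(ProxiesSnapshot { revision });
        }
    });

    while rx.changed().await.is_ok() {
        let latest = rx.borrow_and_update().clone();
        println!("emit snapshot revision {}", latest.revision);
    }
}
```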
* refactor(profiles): move manual activating logic to reducer for deterministic queue tracking
* refactor: replace proxy-data event bridge with pure polling and simplify proxy store
- Replaced the proxy-data event bridge with pure polling: AppDataProvider now fetches the initial snapshot and drives refreshes from the polled switchStatus, removing verge://refresh-* listeners (src/providers/app-data-provider.tsx).
- Simplified proxy-store by dropping the proxies-updated listener queue and unused payload/normalizer helpers; relies on SWR/provider fetch path + calcuProxies for live updates (src/stores/proxy-store.ts).
- Trimmed layout-level event wiring to keep only notice/show/hide subscriptions, removing obsolete refresh listeners (src/pages/_layout/useLayoutEvents.ts).
* refactor(proxy): streamline proxies-updated handling and store event flow
- AppDataProvider now treats `proxies-updated` as the fast path: the listener
calls `applyLiveProxyPayload` immediately and schedules only a single fallback
`fetchLiveProxies` ~600 ms later (replacing the old 0/250/1000/2000 cascade).
Expensive provider/rule refreshes run in parallel via `Promise.allSettled`, and
the multi-stage refresh queue that ran after profile-update completion was removed
(src/providers/app-data-provider.tsx).
- Rebuilt proxy-store to support the event flow: restored `setLive`, provider
normalization, and an animation-frame + async queue that applies payloads without
blocking. Exposed `applyLiveProxyPayload` so providers can push events directly
into the store (src/stores/proxy-store.ts).
* refactor: switch delay
* refactor(app-data-provider): trigger getProfileSwitchStatus revalidation on profile-switch-finished
- AppDataProvider now listens to `profile-switch-finished` and calls `mutate("getProfileSwitchStatus")` to immediately update state and unlock buttons (src/providers/app-data-provider.tsx).
- Retain existing detailed timing logs for monitoring other stages.
- Frontend success notifications remain instant; background refreshes continue asynchronously.
* fix(profiles): prevent duplicate toast on page remount
* refactor(profile-switch): make active switches preemptible and prevent queue piling
- Add notify mechanism to SwitchCancellation to await cancellation without busy-waiting (state.rs:82)
- Collapse pending queue to a single entry in the driver; cancel in-flight task on newer request (driver.rs:232)
- Update handle_update_core to watch cancel token and 30s timeout; release locks, discard draft, and exit early if canceled (state_machine.rs:301)
- Providers revalidate status immediately on profile-switch-finished events (app-data-provider.tsx:208)
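A sketch of the preemptible wait, racing the work against its cancel token and the 30s deadline mentioned above (the helper bodies are placeholders):

```rust
// Whichever finishes first wins: cancellation, the hard deadline, or
// the work itself.
use std::time::Duration;
use tokio_util::sync::CancellationToken;

async fn handle_update_core(cancel: CancellationToken) -> Result<(), String> {
    let work = async {
        // ...apply config, update tray, etc...
        tokio::time::sleep(Duration::from_secs(1)).await;
        Ok::<(), String>(())
    };

    tokio::select! {
        // A newer request cancelled us: release locks and exit early.
        _ = cancel.cancelled() => Err("switch superseded".into()),
        // Hard deadline keeps a wedged core call from stalling the queue.
        _ = tokio::time::sleep(Duration::from_secs(30)) => Err("switch timed out".into()),
        res = work => res,
    }
}
```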
* refactor(core): make core reload phase controllable, reduce 0xcfffffff risk
- CoreManager::apply_config now calls `reload_config_with_retry`, each attempt waits up to 5s, retries 3 times; on failure, returns error with duration logged and triggers core restart if needed (src-tauri/src/core/manager/config.rs:175, 205)
- `reload_config_with_retry` logs attempt info on timeout or error; if error is a Mihomo connection issue, fallback to original restart logic (src-tauri/src/core/manager/config.rs:211)
- `reload_config_once` retains original Mihomo call for retry wrapper usage (src-tauri/src/core/manager/config.rs:247)
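A minimal sketch of the retry wrapper with the 5s-per-attempt, 3-attempt budget described above; `reload_config_once` here is a stand-in for the real Mihomo call:

```rust
// Each attempt gets its own timeout; the last error is surfaced so the
// caller can decide whether to restart the core.
use std::time::Duration;

async fn reload_config_once() -> Result<(), String> {
    Ok(()) // placeholder for the actual Mihomo reload request
}

async fn reload_config_with_retry() -> Result<(), String> {
    let mut last_err = String::new();
    for attempt in 1..=3 {
        match tokio::time::timeout(Duration::from_secs(5), reload_config_once()).await {
            Ok(Ok(())) => return Ok(()),
            Ok(Err(e)) => last_err = format!("attempt {attempt} failed: {e}"),
            Err(_) => last_err = format!("attempt {attempt} timed out after 5s"),
        }
        log::warn!("{last_err}");
    }
    Err(last_err)
}
```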
* chore(frontend-logs): downgrade routine event logs from info to debug
- Logs like `emit_via_app entering spawn_blocking`, `Async emit…`, `Buffered proxies…` are now debug-level (src-tauri/src/core/notification.rs:155, :265, :309…)
- Genuine warnings/errors (failures/timeouts) remain at warn/error
- Core stage logs remain info to keep backend tracking visible
* refactor(frontend-emit): make emit_via_app fire-and-forget async
- `emit_via_app` now a regular function; spawns with `tokio::spawn` and logs a warn if `emit_to` fails, caller returns immediately (src-tauri/src/core/notification.rs:269)
- Removed `.await` at Async emit and flush_proxies calls; only record dispatch duration and warn on failure (src-tauri/src/core/notification.rs:211, :329)
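The fire-and-forget shape, roughly (the `emit_to` placeholder stands in for the real Tauri call):

```rust
// The caller returns immediately; a spawned task performs the send and
// logs a warning on failure.
async fn emit_to(event: &str) -> Result<(), String> {
    let _ = event;
    Ok(())
}

fn emit_via_app(event: &'static str) {
    tokio::spawn(async move {
        if let Err(e) = emit_to(event).await {
            log::warn!("emit {event} failed: {e}");
        }
    });
}
```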
* refactor(ui): restructure profile switch for event-driven speed + polling stability
- Backend
- SwitchManager maintains a lightweight event queue: added `event_sequence`, `recent_events`, and `SwitchResultEvent`; provides `push_event` / `events_after` (state.rs)
- `handle_completion` pushes events on success/failure and keeps `last_result` (driver.rs) for frontend incremental fetch
- New Tauri command `get_profile_switch_events(after_sequence)` exposes `events_after` (profile_switch/mod.rs → profile.rs → lib.rs)
- Notification system
- `NotificationSystem::process_event` now only logs at debug level and disables WebView `emit_to`, fixing the 0xcfffffff crash
- Related emit/buffer functions are now safe no-ops; removed unused structures and warnings (notification.rs)
- Frontend
- services/cmds.ts defines `SwitchResultEvent` and `getProfileSwitchEvents`
- `AppDataProvider` holds `switchEventSeqRef`, polls incremental events every 0.25s (busy) / 1s (idle); each event triggers:
- immediate `globalMutate("getProfiles")` to refresh current profile
- background refresh of proxies/providers/rules via `Promise.allSettled` (failures logged, non-blocking)
- forced `mutateSwitchStatus` to correct state
- original switchStatus effect calls `handleSwitchResult` as fallback; other toast/activation logic handled in profiles.tsx
- Commands / API cleanup
- removed `pub use profile_switch::*;` in cmd::mod.rs to avoid conflicts; frontend uses new command polling
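A sketch of the incremental event queue, using the field and method names from the commit text (the payload fields and the queue bound are assumptions):

```rust
// A monotonically increasing sequence plus a bounded ring of recent
// events that the frontend polls with `events_after`.
use std::collections::VecDeque;

#[derive(Clone, Debug)]
struct SwitchResultEvent {
    sequence: u64,
    profile_id: String,
    success: bool,
}

#[derive(Default)]
struct SwitchEventQueue {
    event_sequence: u64,
    recent_events: VecDeque<SwitchResultEvent>,
}

impl SwitchEventQueue {
    fn push_event(&mut self, profile_id: String, success: bool) {
        self.event_sequence += 1;
        self.recent_events.push_back(SwitchResultEvent {
            sequence: self.event_sequence,
            profile_id,
            success,
        });
        if self.recent_events.len() > 64 {
            self.recent_events.pop_front(); // keep the queue bounded
        }
    }

    /// Everything newer than the caller's last-seen sequence number.
    fn events_after(&self, after_sequence: u64) -> Vec<SwitchResultEvent> {
        self.recent_events
            .iter()
            .filter(|e| e.sequence > after_sequence)
            .cloned()
            .collect()
    }
}
```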
* refactor(frontend): optimize profile switch with optimistic updates
* refactor(profile-switch): switch to event-driven flow with Profile Store
- SwitchManager pushes events; frontend polls get_profile_switch_events
- Zustand store handles optimistic profiles; AppDataProvider applies updates and background-fetches
- UI flicker removed
* fix(app-data): re-hook profile store updates during switch hydration
* fix(notification): restore frontend event dispatch and non-blocking emits
* fix(app-data-provider): restore proxy refresh and seed snapshot after refactor
* fix: ensure switch completion events are received and handle proxies-updated
* fix(app-data-provider): dedupe switch results by taskId and fix stale profile state
* fix(profile-switch): ensure patch_profiles_config_by_profile_index waits for real completion and handle join failures in apply_config_with_timeout
* docs: UPDATELOG.md
* chore: add necessary comments
* fix(core): always dispatch async proxy snapshot after RefreshClash event
* fix(proxy-store, provider): handle pending snapshots and proxy profiles
- Added pending snapshot tracking in proxy-store so `lastAppliedFetchId` no longer jumps on seed. Profile adoption is deferred until a qualifying fetch completes. Exposed `clearPendingProfile` for rollback support.
- Cleared pending snapshot state whenever live payloads apply or the store resets, preventing stale optimistic profile IDs after failures.
- In provider integration, subscribed to the pending proxy profile and fed it into target-profile derivation. Cleared it on failed switch results so hydration can advance and UI status remains accurate.
* fix(proxy): re-hook tray refresh events into proxy refresh queue
- Reattached listen("verge://refresh-proxy-config", …) at src/providers/app-data-provider.tsx:402 and registered it for cleanup.
- Added matching window fallback handler at src/providers/app-data-provider.tsx:430 so in-app dispatches share the same refresh path.
* fix(proxy-snapshot/proxy-groups): address review findings on snapshot placeholders
- src/utils/proxy-snapshot.ts:72-95 now derives snapshot group members solely from proxy-groups.proxies, so provider ids under `use` no longer generate placeholder proxy items.
- src/components/proxy/proxy-groups.tsx:665-677 lets the hydration overlay capture pointer events (and shows a wait cursor) so users can’t interact with snapshot-only placeholders before live data is ready.
* fix(profile-switch): preserve queued requests and avoid stale connection teardown
- Keep earlier queued switches intact by dropping the blanket “collapse” call: after removing duplicates for the same profile, new requests are simply appended, leaving other profiles pending (driver.rs:376). Resolves queue-loss scenario.
- Gate connection cleanup on real successes so cancelled/stale runs no longer tear down Mihomo connections; success handler now skips close_connections_after_switch when success == false (workflow.rs:419).
* fix(profile-switch, layout): improve profile validation and restore backend refresh
- Hardened profile validation using `tokio::fs` with a 5s timeout and offloading YAML parsing to `AsyncHandler::spawn_blocking`, preventing slow disks or malformed files from freezing the runtime (src-tauri/src/cmd/profile_switch/validation.rs:9, 71).
- Restored backend-triggered refresh handling by listening for `verge://refresh-clash-config` / `verge://refresh-verge-config` and invoking shared refresh services so SWR caches stay in sync with core events (src/pages/_layout/useLayoutEvents.ts:6, 45, 55).
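A sketch of the hardened validation path under those assumptions (`tokio::task::spawn_blocking` stands in here for `AsyncHandler::spawn_blocking`, and `serde_yaml_ng` is the YAML crate named elsewhere in this log):

```rust
// Async file read under a 5s timeout, then YAML parsing offloaded to a
// blocking thread so a slow disk or malformed file can't stall the runtime.
use std::time::Duration;

async fn validate_profile_yaml(path: std::path::PathBuf) -> Result<(), String> {
    let read = tokio::fs::read_to_string(&path);
    let content = tokio::time::timeout(Duration::from_secs(5), read)
        .await
        .map_err(|_| "profile read timed out".to_string())?
        .map_err(|e| format!("profile read failed: {e}"))?;

    tokio::task::spawn_blocking(move || {
        serde_yaml_ng::from_str::<serde_yaml_ng::Value>(&content)
            .map(|_| ())
            .map_err(|e| format!("invalid YAML: {e}"))
    })
    .await
    .map_err(|e| format!("validation task panicked: {e}"))?
}
```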
* feat(profile-switch): handle cancellations for superseded requests
- Added a `cancelled` flag and constructor so superseded requests publish an explicit cancellation instead of a failure (src-tauri/src/cmd/profile_switch/state.rs:249, src-tauri/src/cmd/profile_switch/driver.rs:482)
- Updated the profile switch effect to log cancellations as info, retain the shared `mutate` call, and skip emitting error toasts while still refreshing follow-up work (src/pages/profiles.tsx:554, src/pages/profiles.tsx:581)
- Exposed the new flag on the TypeScript contract to keep downstream consumers type-safe (src/services/cmds.ts:20)
* fix(profiles): wrap logging payload for Tauri frontend_log
* fix(profile-switch): add rollback and error propagation for failed persistence
- Added rollback on apply failure so Mihomo restores to the previous profile
before exiting the success path early (state_machine.rs:474).
- Reworked persist_profiles_with_timeout to surface timeout/join/save errors,
convert them into CmdResult failures, and trigger rollback + error propagation
when persistence fails (state_machine.rs:703).
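The rollback shape, as a sketch with placeholder helpers:

```rust
// If persisting the new profile fails, restore the previous one before
// surfacing the error, so runtime and saved state never diverge.
async fn apply_profile(new_id: &str, prev_id: &str) -> Result<(), String> {
    apply_to_core(new_id).await?;
    if let Err(e) = persist_profiles().await {
        apply_to_core(prev_id).await?;
        return Err(format!("persist failed, rolled back: {e}"));
    }
    Ok(())
}

async fn apply_to_core(_id: &str) -> Result<(), String> { Ok(()) }
async fn persist_profiles() -> Result<(), String> { Ok(()) }
```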
* fix(profile-switch): prevent mid-finalize reentrancy and lingering tasks
* fix(profile-switch): preserve pending queue and surface discarded switches
* fix(profile-switch): avoid draining Mihomo sockets on failed/cancelled switches
* fix(app-data-provider): restore backend-driven refresh and reattach fallbacks
* fix(profile-switch): queue concurrent updates and add bounded wait/backoff
* fix(proxy): trigger live refresh on app start for proxy snapshot
* refactor(profile-switch): split flow into layers and centralize async cleanup
- Introduced `SwitchDriver` to encapsulate queue and driver logic while keeping the public Tauri command API.
- Added workflow/cleanup helpers for notification dispatch and Mihomo connection draining, re-exported for API consistency.
- Replaced monolithic state machine with `core.rs`, `context.rs`, and `stages.rs`, plus a thin `mod.rs` re-export layer; stage methods are now individually testable.
- Removed legacy `workflow/state_machine.rs` and adjusted visibility on re-exported types/constants to ensure compilation.
* refactor(tray): change menu text storage to use Arc<str> for improved performance
* refactor(service): utilize SmartString for error messages to enhance memory management
* feat(config): enhance configuration initialization and validation process
* refactor(profile): streamline profile update logic and enhance error handling
* refactor(config): simplify profile item checks and streamline update flag processing
* refactor(disney_plus): add cognitive complexity allowance for check_disney_plus function
* refactor(enhance): restructure configuration and profile item handling for improved clarity and maintainability
* refactor(tray): add cognitive complexity allowance for create_tray_menu function
* refactor(config): add cognitive complexity allowance for patch_config function
* refactor(profiles): simplify item removal logic by introducing take_item_file_by_uid helper function
* refactor(profile): add new validation logic for profile configuration syntax
* refactor(profiles): improve formatting and readability of take_item_file_by_uid function
* refactor(cargo): change cognitive complexity level from warn to deny
* refactor(cargo): ensure cognitive complexity is denied in Cargo.toml
* refactor(i18n): clean up imports and improve code readability
* refactor(proxy): simplify system proxy toggle logic
* refactor(service): remove unnecessary `as_str()` conversion in error handling
* refactor(tray): modularize tray menu creation for better maintainability
* refactor(tray): update menu item text handling to use references for improved performance
In Rust, the `or` and `or_else` methods have distinct behavioral differences. The `or` method always eagerly evaluates its argument and executes any associated function calls. This can lead to unnecessary performance costs—especially in expensive operations like string processing or file handling—and may even trigger unintended side effects.
In contrast, `or_else` evaluates its closure lazily, only when necessary. Introducing a Clippy lint to disallow `or` sacrifices a bit of code simplicity but ensures predictable behavior and enforces lazy evaluation for better performance.
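A small worked example of the eager-vs-lazy difference:

```rust
// `or` builds its fallback unconditionally; `or_else` only on the None path.
fn expensive_fallback() -> Option<String> {
    println!("computing fallback...");
    Some("fallback".to_string())
}

fn main() {
    let hit = Some("cached".to_string());

    // Eager: expensive_fallback() runs even though `hit` is Some.
    let a = hit.clone().or(expensive_fallback());

    // Lazy: the closure is never called because `hit` is Some.
    let b = hit.or_else(expensive_fallback);

    assert_eq!(a, b);
}
```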
* perf: utilize smartstring for string handling
- Updated various modules to replace standard String with smartstring::alias::String for improved performance and memory efficiency.
- Adjusted string manipulations and conversions throughout the codebase to ensure compatibility with the new smartstring type.
- Enhanced readability and maintainability by using `.into()` for conversions where applicable.
- Ensured that all instances of string handling in configuration, logging, and network management leverage the benefits of smartstring.
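A minimal illustration of the inline-storage benefit (on 64-bit targets, `smartstring`'s default layout keeps strings of up to 23 bytes on the stack):

```rust
// Short strings live inline with no heap allocation; long strings
// transparently spill to the heap.
use smartstring::alias::String as SmartString;

fn main() {
    let key: SmartString = "mixed-port".into();
    let long: SmartString = "a".repeat(64).into();

    assert!(key.is_inline());
    assert!(!long.is_inline());

    // Converting back to std String where an API demands it.
    let std_string: std::string::String = key.to_string();
    println!("{std_string} / {long}");
}
```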
* fix: replace wrap_err with stringify_err for better error handling in UWP tool invocation
* refactor: update import path for StringifyErr and adjust string handling in sysopt
* fix: correct import path for CmdResult in UWP module
* fix: update argument type for execute_sysproxy_command to use std::string::String
* fix: add missing CmdResult import in UWP platform module
* fix: improve string handling and error messaging across multiple files
* style: format code for improved readability and consistency across multiple files
* fix: remove unused file
* feat: local backup
* refactor(backup): make local backup helpers synchronous and clean up redundant checks
- Converted local backup helpers to synchronous functions to remove unused async warnings and align command signatures.
- Updated list/delete/export commands to call the sync feature functions directly without awaits while preserving behavior.
- Simplified destination directory creation to always ensure parent folders exist without redundant checks, satisfying Clippy.
- Split the monolithic unlock checker into a module tree (mod.rs:9–133), wiring service-specific tasks while keeping exported Tauri commands untouched.
- Centralize shared data and helpers in types.rs (1–40) and utils.rs (1–21) for reusable timestamp and emoji logic.
- Move each provider’s logic into its own file (bilibili.rs, disney_plus.rs, netflix.rs, etc.), preserving behavior and making future additions or fixes localized.
* feat: update service installation scripts and IPC integration
- Updated `Cargo.toml` to use version 2.0.8 of `clash_verge_service_ipc` with "client" feature.
- Renamed service installation and uninstallation scripts in `post-install.sh` and `pre-remove.sh`.
- Removed `service_ipc` module and refactored IPC handling in `service.rs` to use the new `clash_verge_service_ipc` directly.
- Adjusted service version checking and core management to align with the new IPC structure.
- Simplified directory checks in `dirs.rs` and updated logging configurations in `init.rs`.
- Updated Linux configuration file to reflect new script names.
- Enhanced service installer hook to manage state more effectively.
* refactor: simplify ClashConfig instantiation and remove unused service log file function
* feat: update clash_verge_service_ipc to version 2.0.9 and enhance service initialization logging
* chore: update clash_verge_service_ipc to version 2.0.10 and refactor async service manager initialization
* fix: update clash_verge_service_ipc to version 2.0.11 and improve service manager initialization
* fix: increase sleep duration for socket readiness check to improve stability
* fix: update clash_verge_service_ipc to version 2.0.12 and kode-bridge to version 0.3.4; refactor service management and IPC path checks
* fix: update clash_verge_service_ipc to version 2.0.13; refactor service connection and initialization logic
- Updated logging macros to eliminate the boolean parameter for print control, simplifying the logging calls throughout the codebase.
- Adjusted all logging calls in various modules (lib.rs, lightweight.rs, help.rs, init.rs, logging.rs, resolve/mod.rs, resolve/scheme.rs, resolve/ui.rs, resolve/window.rs, server.rs, singleton.rs, window_manager.rs) to reflect the new macro structure.
- Ensured consistent logging behavior across the application by standardizing the logging format.
* refactor: clash-verge-service management
* fix: correct service state checks in ProxyControlSwitches component
* refactor: improve logging in service state update functions
* fix: add missing async handler for Windows and adjust logging import for macOS
* fix: streamline logging imports and add missing async handler for Windows
* refactor: remove unused useServiceStateSync hook and update imports in _layout
* refactor: remove unused useServiceStateSync import and clean up code in ProxyControlSwitches and _layout
* refactor: simplify service status checks and reduce wait time in useServiceInstaller hook
* refactor: remove unnecessary logging statements in service checks and IPC connection
* refactor: extract SwitchRow component for better code organization and readability
* refactor: enhance service state management and update related mutations in layout
* refactor: streamline core stopping logic and improve IPC connection logging
* refactor: consolidate service uninstallation logic and improve error handling
* fix: simplify conditional statements in CoreManager and service functions
* feat: add backoff dependency and implement retry strategy for IPC requests
* refactor: remove redundant Windows conditional and improve error handling in IPC tests
* test: improve error handling in IPC tests for message signing and verification
* fix: adjust IPC backoff retry parameters
* refactor: Remove service state tracking and related logic from service management
* feat: Enhance service status handling with logging and running mode updates
* fix: Improve service status handling with enhanced error logging
* fix: Ensure proper handling of service operations with error propagation
* refactor: Simplify service operation execution and enhance service status handling
* fix: Improve error message formatting in service operation execution and simplify service status retrieval
* refactor: Replace Cache with CacheProxy in multiple modules and update CacheEntry to be generic
* fix: Remove unnecessary success message from config validation
* refactor: Comment out logging statements in service version check and IPC request handling
* feat: update Cargo.toml for 2024 edition and optimize release profiles
* feat: refactor environment variable settings for Linux and improve code organization
* Refactor conditional statements to use `&&` for improved readability
- Updated multiple files to combine nested `if let` statements using `&&` for better clarity and conciseness.
- This change enhances the readability of the code by reducing indentation levels and making the conditions more straightforward.
- Affected files include: media_unlock_checker.rs, profile.rs, clash.rs, profiles.rs, async_proxy_query.rs, core.rs, handle.rs, hotkey.rs, service.rs, timer.rs, tray/mod.rs, merge.rs, seq.rs, config.rs, proxy.rs, window.rs, general.rs, dirs.rs, i18n.rs, init.rs, network.rs, and window.rs in the resolve module.
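An example of the before/after shape (`&&`-combined `if let`, i.e. let-chains, which the 2024-edition bump above enables); the helper is illustrative:

```rust
// Nested `if let` blocks flattened into a single let-chain.
fn port_of(url: &str) -> Option<u16> {
    let (_, tail) = url.rsplit_once(':')?;
    tail.parse().ok()
}

fn main() {
    let proxy = Some("127.0.0.1:7890");

    // Before: one indent level per condition.
    if let Some(addr) = proxy {
        if let Some(port) = port_of(addr) {
            println!("nested: {port}");
        }
    }

    // After: a single chain, flatter and easier to scan.
    if let Some(addr) = proxy
        && let Some(port) = port_of(addr)
    {
        println!("chained: {port}");
    }
}
```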
* refactor: streamline conditional checks using `&&` for improved readability
* fix: update release profile settings for panic behavior and optimization
* fix: adjust optimization level in Cargo.toml and reorder imports in lightweight.rs
* add proxy menu in tray
* Add proxy node entries to the Windows system tray
Proxy -> proxy group -> nodes, with corresponding GUI synchronization
* Show proxy nodes in the Windows system tray
and wire up the refresh mechanism for both the GUI and the tray
* rust format
* Add proxy node delay display to the Windows system tray
* Squashed commit of the following:
commit 44caaa62c5
Merge: 1916e539 3939741a
Author: Junkai W. <129588175+Be-Forever223@users.noreply.github.com>
Date: Sat Aug 30 02:37:07 2025 +0800
Merge branch 'dev' into dev
commit 3939741a06
Author: Tunglies <tunglies.dev@outlook.com>
Date: Sat Aug 30 02:24:47 2025 +0800
refactor: migrate from serde_yaml to serde_yaml_ng for improved YAML handling (#4568)
* refactor: migrate from serde_yaml to serde_yaml_ng for improved YAML handling
* refactor: format code for better readability in DNS configuration
commit f86a1816e0
Author: Tunglies <tunglies.dev@outlook.com>
Date: Sat Aug 30 02:15:34 2025 +0800
chore(deps): update sysinfo to 0.37.0 and zip to 4.5.0 in Cargo.toml (#4564)
* chore(deps): update sysinfo to 0.37.0 and zip to 4.5.0 in Cargo.toml
* chore(deps): remove libnghttp2-sys dependency and update isahc features in Cargo.toml
* chore(deps): remove sysinfo and zip from ignoreDeps in renovate.json
commit 9cbd8b4529
Author: Tunglies <77394545+Tunglies@users.noreply.github.com>
Date: Sat Aug 30 01:30:48 2025 +0800
feat: add x86 OpenSSL installation step for macOS in workflows
commit 5dea73fc2a
Author: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Date: Sat Aug 30 01:21:53 2025 +0800
chore(deps): update npm dependencies (#4542)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
commit 01af1bea23
Author: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Date: Sat Aug 30 01:21:46 2025 +0800
chore(deps): update rust crate reqwest_dav to 0.2.2 (#4554)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
commit 1227e86134
Author: Tunglies <77394545+Tunglies@users.noreply.github.com>
Date: Sat Aug 30 01:12:03 2025 +0800
Remove unnecessary "rustls-tls" feature from reqwest dependency in Cargo.toml
commit c6a6ea48dd
Author: Tunglies <tunglies.dev@outlook.com>
Date: Fri Aug 29 23:51:09 2025 +0800
refactor: enhance async initialization and streamline setup process (#4560)
* feat: Implement DNS management for macOS
- Added `set_public_dns` and `restore_public_dns` functions in `dns.rs` to manage system DNS settings.
- Introduced `resolve` module to encapsulate DNS and scheme resolution functionalities.
- Implemented `resolve_scheme` function in `scheme.rs` to handle deep links and profile imports.
- Created UI readiness management in `ui.rs` to track and update UI loading states.
- Developed window management logic in `window.rs` to handle window creation and visibility.
- Added initial loading overlay script in `window_script.rs` for better user experience during startup.
- Updated server handling in `server.rs` to integrate new resolve functionalities.
- Refactored window creation calls in `window_manager.rs` to use the new window management logic.
* refactor: streamline asynchronous handling in config and resolve setup
* Revert "refactor: streamline asynchronous handling in config and resolve setup"
This reverts commit 23d7dc86d5.
* fix: optimize asynchronous memory handling
* fix: enhance task logging by adding size check for special cases
* refactor: enhance async initialization and streamline setup process
* refactor: optimize async setup by consolidating initialization tasks
* chore: update changelog for Mihomo(Meta) kernel upgrade to v1.19.13
* fix: improve startup phase initialization performance
* refactor: optimize file read/write performance to reduce application wait time
* refactor: simplify app instance exit logic and adjust system proxy guard initialization
* refactor: change resolve_setup_async to synchronous execution for improved performance
* refactor: update resolve_setup_async to accept AppHandle for improved initialization flow
* refactor: remove unnecessary initialization of portable flag in run function
* refactor: consolidate async initialization tasks into a single blocking call for improved execution flow
* refactor: optimize resolve_setup_async by restructuring async tasks for improved concurrency
* refactor: streamline resolve_setup_async and embed_server for improved async handling
* refactor: separate synchronous and asynchronous setup functions for improved clarity
* refactor: simplify async notification handling and remove redundant network manager initialization
* refactor: enhance async handling in proxy request cache and window creation logic
* refactor: improve code formatting and readability in ProxyRequestCache
* refactor: adjust singleton check timeout and optimize trace size conditions
* refactor: update TRACE_SPECIAL_SIZE to include additional size condition
* refactor: update kode-bridge dependency to version 0.2.1-rc2
* refactor: replace RwLock with AtomicBool for UI readiness and implement event-driven monitoring
* refactor: convert async functions to synchronous for window management
* Update src-tauri/src/utils/resolve/window.rs
* fix: handle missing app_handle in create_window function
* Update src-tauri/src/module/lightweight.rs
* format