
Coming soon
Next release
macOS and Windows native desktop apps
  • Native apps with the same permissions, audit trail, and PII protections as web chat
  • System tray presence, global shortcut (Cmd+Shift+T / Ctrl+Shift+T), push notifications, offline cache
  • Beta access via sales
2026-05
Latest release
Knowledge Base — document lifecycle v2
  • Three-tier dedup (byte hash → normalised fingerprint → vector similarity 0.92 gate → Gemini verification)
  • Re-uploaded document versions soft-delete the previous version and record a supersession reason
  • Version history panel in the admin console
  • Slack attachments flow into the KB processing queue automatically (/upload-on toggles per channel)
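The three-tier dedup pipeline can be sketched roughly as follows. This is a minimal illustration, not the shipped implementation: the function names, the embedding and LLM-verification callbacks, and everything except the 0.92 similarity gate are assumptions.

```python
import hashlib
import re

SIMILARITY_GATE = 0.92  # vector-similarity gate from the release notes

def byte_hash(data: bytes) -> str:
    # Tier 1: exact byte-level identity.
    return hashlib.sha256(data).hexdigest()

def normalised_fingerprint(text: str) -> str:
    # Tier 2: collapse whitespace and case so trivial edits still match.
    canonical = re.sub(r"\s+", " ", text).strip().lower()
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def is_duplicate(new_doc: bytes, old_doc: bytes, embed, llm_verify) -> bool:
    """embed: text -> vector; llm_verify: (text, text) -> bool (hypothetical hooks)."""
    if byte_hash(new_doc) == byte_hash(old_doc):
        return True
    new_text = new_doc.decode("utf-8", errors="replace")
    old_text = old_doc.decode("utf-8", errors="replace")
    if normalised_fingerprint(new_text) == normalised_fingerprint(old_text):
        return True
    # Tier 3: only near-identical vectors reach the (costly) LLM verification step.
    if cosine(embed(new_text), embed(old_text)) >= SIMILARITY_GATE:
        return llm_verify(new_text, old_text)
    return False
```

The ordering is the point: each tier is cheaper than the next, so most duplicates are caught before any embedding or model call happens.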
Agent Builder v1
  • Author custom agents, enable tools, and test live at /admin/agents
  • Saved definitions activate immediately — no redeploy
Channels
  • Microsoft Teams generally available
  • KakaoTalk bridge (Korean auto-replies, group-room mirroring to admin)
  • SMS (LMS/MMS) — 1-on-1 alerts and two-way conversation
Localisation
  • English and Japanese added (intake forms, chat, email notifications)
  • Default language per organisation; users can switch mid-conversation
Context efficiency
  • Older tool outputs auto-compressed after 3 turns
  • Automatic compaction at 70% context window
  • LLM-side prompt cache for system prompts → ~75% input cost reduction
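The first two bullets combine into a two-step context-maintenance pass; a minimal sketch follows. The 3-turn and 70% figures come from the notes, but the message shape, the `summarise`/`count_tokens` callbacks, and the oldest-first merge strategy are illustrative assumptions.

```python
COMPRESS_AFTER_TURNS = 3      # tool outputs older than this get summarised
COMPACTION_THRESHOLD = 0.70   # fraction of the context window that triggers compaction

def maintain_context(messages, current_turn, window_tokens, count_tokens, summarise):
    """messages: [{'role', 'turn', 'content'}]; summarise: str -> shorter str."""
    # Step 1: compress tool outputs older than 3 turns.
    out = []
    for msg in messages:
        if msg["role"] == "tool" and current_turn - msg["turn"] >= COMPRESS_AFTER_TURNS:
            out.append({**msg, "content": summarise(msg["content"])})
        else:
            out.append(msg)

    def used():
        return sum(count_tokens(m["content"]) for m in out)

    # Step 2: if still over 70% of the window, fold oldest messages into their successor.
    while used() > COMPACTION_THRESHOLD * window_tokens and len(out) > 1:
        oldest = out.pop(0)
        out[0] = {**out[0], "content": summarise(oldest["content"]) + "\n" + out[0]["content"]}
    return out
```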
2026-04
Confidence gate + proactive skills
Confidence gate
  • When answer confidence is low, the agent asks a clarifying question instead of guessing
  • On by default for every active organisation
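The gate's decision logic reduces to a single comparison; here is a sketch. The 0.6 threshold is an assumed placeholder — the notes do not state the actual cut-off — and the dict shape is illustrative.

```python
CONFIDENCE_THRESHOLD = 0.6  # assumed cut-off; the release notes do not give the real value

def gate_answer(answer: str, confidence: float, clarifying_question: str) -> dict:
    """Return the draft answer when confident; otherwise ask instead of guessing."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"type": "answer", "text": answer}
    return {"type": "clarify", "text": clarifying_question}
```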
Proactive skill generation
  • Every Monday at 09:00 KST the system detects repeated employee work patterns and proposes new skills
  • Slack DM to admins with [Approve] / [Reject] buttons
  • 90-day cooldown + automatic semantic dedup
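The cooldown-plus-dedup check on a candidate skill might look like this sketch. The 90-day window is from the notes; the 0.85 semantic-similarity gate and the `similar` callback are assumptions.

```python
from datetime import datetime, timedelta

COOLDOWN = timedelta(days=90)
SEMANTIC_GATE = 0.85  # assumed threshold for "semantically the same skill"

def should_propose(candidate_vec, history, now, similar) -> bool:
    """history: [(vector, proposed_at)] of past proposals.
    Skip the candidate if a similar skill was proposed within the last 90 days."""
    for vec, proposed_at in history:
        if similar(candidate_vec, vec) >= SEMANTIC_GATE and now - proposed_at < COOLDOWN:
            return False
    return True
```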
File processing
  • Safe ZIP extraction (depth limit, ratio guard)
  • Vertex service-account auth for Gemini Files API multimodal calls
  • Files larger than 8 MB process asynchronously with chat-side completion push
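The two ZIP guards — a nesting-depth limit and a compression-ratio check against zip bombs — can be sketched with the standard library. The limits of 3 levels and 100:1 are assumed values, not the product's actual configuration.

```python
import io
import zipfile

MAX_DEPTH = 3    # assumed nested-archive depth limit
MAX_RATIO = 100  # assumed uncompressed/compressed ratio guard against zip bombs

def check_zip(source, depth: int = 0) -> None:
    """Raise ValueError if the archive trips either guard; recurses into nested zips."""
    if depth >= MAX_DEPTH:
        raise ValueError("nested archive depth limit exceeded")
    with zipfile.ZipFile(source) as zf:
        for info in zf.infolist():
            # A huge expansion ratio is the classic zip-bomb signature.
            if info.compress_size and info.file_size / info.compress_size > MAX_RATIO:
                raise ValueError(f"suspicious compression ratio: {info.filename}")
            if info.filename.lower().endswith(".zip"):
                with zf.open(info) as inner:
                    check_zip(io.BytesIO(inner.read()), depth + 1)
```

Running the check before extraction means a hostile archive is rejected without ever writing its contents to disk.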
2026-03
Vertex Gemini 3 standardisation
LLM provider consolidation
  • All LLM traffic standardised on Google Cloud Vertex AI Gemini
  • Simple / medium queries: Gemini 3.1 Flash Lite
  • Complex work: Gemini 3 Flash Preview
  • Conversation summarisation: Groq (helper)
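The routing table above amounts to a small dispatch function; a sketch under stated assumptions follows. The model ID strings and tier labels are invented for illustration — only the model names and roles come from the notes.

```python
MODEL_BY_TIER = {
    "simple": "gemini-3.1-flash-lite",   # assumed ID for Gemini 3.1 Flash Lite
    "medium": "gemini-3.1-flash-lite",
    "complex": "gemini-3-flash-preview", # assumed ID for Gemini 3 Flash Preview
}

def route(query_tier: str, task: str = "chat") -> str:
    # Conversation summarisation goes to the Groq helper, everything else to Vertex Gemini.
    if task == "summarise":
        return "groq"
    # Unknown tiers fall through to the most capable model.
    return MODEL_BY_TIER.get(query_tier, MODEL_BY_TIER["complex"])
```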
Security
  • AI Basic Act Article 27 watermarking on by default
  • Korean prompt-injection detection patterns hardened
  • Compliance evidence package — exportable in one click

Want more detail?

Talk to sales

See how a release fits your organisation — our sales team will walk you through it.