@ -0,0 +1,19 @@ | |||||
.rebar3 | |||||
_* | |||||
.eunit | |||||
*.o | |||||
*.beam | |||||
*.plt | |||||
*.swp | |||||
*.swo | |||||
.erlang.cookie | |||||
ebin | |||||
log | |||||
erl_crash.dump | |||||
.rebar | |||||
logs | |||||
_build | |||||
.idea | |||||
*.iml | |||||
rebar3.crashdump | |||||
*~ |
@ -0,0 +1,191 @@ | |||||
Apache License | |||||
Version 2.0, January 2004 | |||||
http://www.apache.org/licenses/ | |||||
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION | |||||
1. Definitions. | |||||
"License" shall mean the terms and conditions for use, reproduction, | |||||
and distribution as defined by Sections 1 through 9 of this document. | |||||
"Licensor" shall mean the copyright owner or entity authorized by | |||||
the copyright owner that is granting the License. | |||||
"Legal Entity" shall mean the union of the acting entity and all | |||||
other entities that control, are controlled by, or are under common | |||||
control with that entity. For the purposes of this definition, | |||||
"control" means (i) the power, direct or indirect, to cause the | |||||
direction or management of such entity, whether by contract or | |||||
otherwise, or (ii) ownership of fifty percent (50%) or more of the | |||||
outstanding shares, or (iii) beneficial ownership of such entity. | |||||
"You" (or "Your") shall mean an individual or Legal Entity | |||||
exercising permissions granted by this License. | |||||
"Source" form shall mean the preferred form for making modifications, | |||||
including but not limited to software source code, documentation | |||||
source, and configuration files. | |||||
"Object" form shall mean any form resulting from mechanical | |||||
transformation or translation of a Source form, including but | |||||
not limited to compiled object code, generated documentation, | |||||
and conversions to other media types. | |||||
"Work" shall mean the work of authorship, whether in Source or | |||||
Object form, made available under the License, as indicated by a | |||||
copyright notice that is included in or attached to the work | |||||
(an example is provided in the Appendix below). | |||||
"Derivative Works" shall mean any work, whether in Source or Object | |||||
form, that is based on (or derived from) the Work and for which the | |||||
editorial revisions, annotations, elaborations, or other modifications | |||||
represent, as a whole, an original work of authorship. For the purposes | |||||
of this License, Derivative Works shall not include works that remain | |||||
separable from, or merely link (or bind by name) to the interfaces of, | |||||
the Work and Derivative Works thereof. | |||||
"Contribution" shall mean any work of authorship, including | |||||
the original version of the Work and any modifications or additions | |||||
to that Work or Derivative Works thereof, that is intentionally | |||||
submitted to Licensor for inclusion in the Work by the copyright owner | |||||
or by an individual or Legal Entity authorized to submit on behalf of | |||||
the copyright owner. For the purposes of this definition, "submitted" | |||||
means any form of electronic, verbal, or written communication sent | |||||
to the Licensor or its representatives, including but not limited to | |||||
communication on electronic mailing lists, source code control systems, | |||||
and issue tracking systems that are managed by, or on behalf of, the | |||||
Licensor for the purpose of discussing and improving the Work, but | |||||
excluding communication that is conspicuously marked or otherwise | |||||
designated in writing by the copyright owner as "Not a Contribution." | |||||
"Contributor" shall mean Licensor and any individual or Legal Entity | |||||
on behalf of whom a Contribution has been received by Licensor and | |||||
subsequently incorporated within the Work. | |||||
2. Grant of Copyright License. Subject to the terms and conditions of | |||||
this License, each Contributor hereby grants to You a perpetual, | |||||
worldwide, non-exclusive, no-charge, royalty-free, irrevocable | |||||
copyright license to reproduce, prepare Derivative Works of, | |||||
publicly display, publicly perform, sublicense, and distribute the | |||||
Work and such Derivative Works in Source or Object form. | |||||
3. Grant of Patent License. Subject to the terms and conditions of | |||||
this License, each Contributor hereby grants to You a perpetual, | |||||
worldwide, non-exclusive, no-charge, royalty-free, irrevocable | |||||
(except as stated in this section) patent license to make, have made, | |||||
use, offer to sell, sell, import, and otherwise transfer the Work, | |||||
where such license applies only to those patent claims licensable | |||||
by such Contributor that are necessarily infringed by their | |||||
Contribution(s) alone or by combination of their Contribution(s) | |||||
with the Work to which such Contribution(s) was submitted. If You | |||||
institute patent litigation against any entity (including a | |||||
cross-claim or counterclaim in a lawsuit) alleging that the Work | |||||
or a Contribution incorporated within the Work constitutes direct | |||||
or contributory patent infringement, then any patent licenses | |||||
granted to You under this License for that Work shall terminate | |||||
as of the date such litigation is filed. | |||||
4. Redistribution. You may reproduce and distribute copies of the | |||||
Work or Derivative Works thereof in any medium, with or without | |||||
modifications, and in Source or Object form, provided that You | |||||
meet the following conditions: | |||||
(a) You must give any other recipients of the Work or | |||||
Derivative Works a copy of this License; and | |||||
(b) You must cause any modified files to carry prominent notices | |||||
stating that You changed the files; and | |||||
(c) You must retain, in the Source form of any Derivative Works | |||||
that You distribute, all copyright, patent, trademark, and | |||||
attribution notices from the Source form of the Work, | |||||
excluding those notices that do not pertain to any part of | |||||
the Derivative Works; and | |||||
(d) If the Work includes a "NOTICE" text file as part of its | |||||
distribution, then any Derivative Works that You distribute must | |||||
include a readable copy of the attribution notices contained | |||||
within such NOTICE file, excluding those notices that do not | |||||
pertain to any part of the Derivative Works, in at least one | |||||
of the following places: within a NOTICE text file distributed | |||||
as part of the Derivative Works; within the Source form or | |||||
documentation, if provided along with the Derivative Works; or, | |||||
within a display generated by the Derivative Works, if and | |||||
wherever such third-party notices normally appear. The contents | |||||
of the NOTICE file are for informational purposes only and | |||||
do not modify the License. You may add Your own attribution | |||||
notices within Derivative Works that You distribute, alongside | |||||
or as an addendum to the NOTICE text from the Work, provided | |||||
that such additional attribution notices cannot be construed | |||||
as modifying the License. | |||||
You may add Your own copyright statement to Your modifications and | |||||
may provide additional or different license terms and conditions | |||||
for use, reproduction, or distribution of Your modifications, or | |||||
for any such Derivative Works as a whole, provided Your use, | |||||
reproduction, and distribution of the Work otherwise complies with | |||||
the conditions stated in this License. | |||||
5. Submission of Contributions. Unless You explicitly state otherwise, | |||||
any Contribution intentionally submitted for inclusion in the Work | |||||
by You to the Licensor shall be under the terms and conditions of | |||||
this License, without any additional terms or conditions. | |||||
Notwithstanding the above, nothing herein shall supersede or modify | |||||
the terms of any separate license agreement you may have executed | |||||
with Licensor regarding such Contributions. | |||||
6. Trademarks. This License does not grant permission to use the trade | |||||
names, trademarks, service marks, or product names of the Licensor, | |||||
except as required for reasonable and customary use in describing the | |||||
origin of the Work and reproducing the content of the NOTICE file. | |||||
7. Disclaimer of Warranty. Unless required by applicable law or | |||||
agreed to in writing, Licensor provides the Work (and each | |||||
Contributor provides its Contributions) on an "AS IS" BASIS, | |||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or | |||||
implied, including, without limitation, any warranties or conditions | |||||
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A | |||||
PARTICULAR PURPOSE. You are solely responsible for determining the | |||||
appropriateness of using or redistributing the Work and assume any | |||||
risks associated with Your exercise of permissions under this License. | |||||
8. Limitation of Liability. In no event and under no legal theory, | |||||
whether in tort (including negligence), contract, or otherwise, | |||||
unless required by applicable law (such as deliberate and grossly | |||||
negligent acts) or agreed to in writing, shall any Contributor be | |||||
liable to You for damages, including any direct, indirect, special, | |||||
incidental, or consequential damages of any character arising as a | |||||
result of this License or out of the use or inability to use the | |||||
Work (including but not limited to damages for loss of goodwill, | |||||
work stoppage, computer failure or malfunction, or any and all | |||||
other commercial damages or losses), even if such Contributor | |||||
has been advised of the possibility of such damages. | |||||
9. Accepting Warranty or Additional Liability. While redistributing | |||||
the Work or Derivative Works thereof, You may choose to offer, | |||||
and charge a fee for, acceptance of support, warranty, indemnity, | |||||
or other liability obligations and/or rights consistent with this | |||||
License. However, in accepting such obligations, You may act only | |||||
on Your own behalf and on Your sole responsibility, not on behalf | |||||
of any other Contributor, and only if You agree to indemnify, | |||||
defend, and hold each Contributor harmless for any liability | |||||
incurred by, or claims asserted against, such Contributor by reason | |||||
of your accepting any such warranty or additional liability. | |||||
END OF TERMS AND CONDITIONS | |||||
Copyright 2025, SisMaker <1713699517@qq.com>. | |||||
Licensed under the Apache License, Version 2.0 (the "License"); | |||||
you may not use this file except in compliance with the License. | |||||
You may obtain a copy of the License at | |||||
http://www.apache.org/licenses/LICENSE-2.0 | |||||
Unless required by applicable law or agreed to in writing, software | |||||
distributed under the License is distributed on an "AS IS" BASIS, | |||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |||||
See the License for the specific language governing permissions and | |||||
limitations under the License. | |||||
@ -0,0 +1,31 @@ | |||||
# 斗地主AI项目完整性分析报告 | |||||
**分析时间:** 2025-02-21 04:57:32 UTC | |||||
**分析人:** SisMaker | |||||
**项目版本:** 2.0.0 | |||||
## 核心功能完整性评估 | |||||
### 1. 基础游戏系统 (完成度:95%) | |||||
✅ 游戏规则引擎 | |||||
✅ 牌型判断系统 | |||||
✅ 玩家管理 | |||||
✅ 游戏流程控制 | |||||
### 2. AI决策系统 (完成度:90%) | |||||
✅ 基础决策逻辑 | |||||
✅ 深度学习模型 | |||||
✅ 对手建模 | |||||
✅ 策略优化器 | |||||
### 3. 学习系统 (完成度:85%) | |||||
✅ 经验积累机制 | |||||
✅ 模型训练系统 | |||||
✅ 在线学习能力 | |||||
❌ 分布式学习支持 | |||||
### 4. 性能优化 (完成度:80%) | |||||
✅ 基础性能优化 | |||||
✅ 内存管理 | |||||
❌ 完整的性能测试 | |||||
❌ 负载均衡 |
@ -0,0 +1,178 @@ | |||||
cardSrv | |||||
===== | |||||
An OTP application | |||||
Build | |||||
----- | |||||
$ rebar3 compile | |||||
# 自动斗地主AI系统项目文档 | |||||
## 项目概述 | |||||
本项目是一个基于Erlang开发的智能斗地主游戏系统,集成了深度学习、并行计算、性能监控和可视化分析等先进功能。系统采用模块化设计,具有高可扩展性和可维护性。 | |||||
## 系统架构 | |||||
### 核心模块 | |||||
1. **游戏核心模块** | |||||
- cards.erl: 牌类操作 | |||||
- card_rules.erl: 游戏规则 | |||||
- game_server.erl: 游戏服务器 | |||||
- player.erl: 玩家管理 | |||||
2. **AI系统模块** | |||||
- deep_learning.erl: 深度学习引擎 | |||||
- advanced_ai_player.erl: 高级AI玩家 | |||||
- matrix.erl: 矩阵运算 | |||||
- optimizer.erl: 优化器 | |||||
3. **系统支持模块** | |||||
- parallel_compute.erl: 并行计算 | |||||
- performance_monitor.erl: 性能监控 | |||||
- visualization.erl: 可视化分析 | |||||
## 功能特性 | |||||
### 1. 基础游戏功能 | |||||
- 完整的斗地主规则实现 | |||||
- 多人游戏支持 | |||||
- 房间管理系统 | |||||
- 积分系统 | |||||
### 2. AI系统 | |||||
#### 2.1 深度学习功能 | |||||
- 多层神经网络 | |||||
- 多种优化器(Adam, SGD) | |||||
- 实时学习能力 | |||||
- 策略适应 | |||||
#### 2.2 AI玩家特性 | |||||
- 多种性格特征(激进、保守、平衡、自适应) | |||||
- 动态决策系统 | |||||
- 对手模式识别 | |||||
- 自适应学习 | |||||
### 3. 系统性能 | |||||
#### 3.1 并行计算 | |||||
- 工作进程池管理 | |||||
- 负载均衡 | |||||
- 异步处理 | |||||
- 结果聚合 | |||||
#### 3.2 性能监控 | |||||
- 实时性能指标收集 | |||||
- 自动化性能分析 | |||||
- 告警系统 | |||||
- 性能报告生成 | |||||
### 4. 可视化分析 | |||||
- 多种图表类型支持 | |||||
- 实时数据更新 | |||||
- 多格式导出 | |||||
- 自定义显示选项 | |||||
## 技术实现 | |||||
### 1. 深度学习实现 | |||||
```erlang | |||||
% 示例:创建神经网络 | |||||
NetworkConfig = [64, 128, 64, 32], | |||||
{ok, Network} = deep_learning:create_network(NetworkConfig). | |||||
``` | |||||
### 2. 并行处理 | |||||
```erlang | |||||
% 示例:并行预测 | |||||
Inputs = [Input1, Input2, Input3], | |||||
{ok, Results} = parallel_compute:parallel_predict(Inputs, Network). | |||||
``` | |||||
### 3. 性能监控 | |||||
```erlang | |||||
% 示例:启动监控 | |||||
{ok, MonitorId} = performance_monitor:start_monitoring(Network). | |||||
``` | |||||
## 系统要求 | |||||
- Erlang/OTP 21+ | |||||
- 支持并行计算的多核系统 | |||||
- 足够的内存支持深度学习运算 | |||||
- 图形库支持(用于可视化) | |||||
## 性能指标 | |||||
- 支持同时运行多个游戏房间 | |||||
- AI决策响应时间 < 1秒 | |||||
- 支持实时性能监控和分析 | |||||
- 可扩展到分布式系统 | |||||
## 已实现功能列表 | |||||
### 游戏核心功能 | |||||
- [x] 完整的斗地主规则实现 | |||||
- [x] 多玩家支持 | |||||
- [x] 房间管理 | |||||
- [x] 积分系统 | |||||
### AI功能 | |||||
- [x] 深度学习引擎 | |||||
- [x] 多种AI性格 | |||||
- [x] 自适应学习 | |||||
- [x] 策略优化 | |||||
### 系统功能 | |||||
- [x] 并行计算 | |||||
- [x] 性能监控 | |||||
- [x] 可视化分析 | |||||
- [x] 实时数据处理 | |||||
## 待优化功能 | |||||
1. 分布式系统支持 | |||||
2. 数据持久化 | |||||
3. 更多AI算法 | |||||
4. Web界面 | |||||
5. 移动端支持 | |||||
6. 安全性增强 | |||||
7. 容错机制 | |||||
8. 日志系统 | |||||
## 使用说明 | |||||
### 1. 启动系统 | |||||
```erlang | |||||
% 运行测试 | |||||
ai_test:run_test(). | |||||
``` | |||||
### 2. 创建游戏房间 | |||||
```erlang | |||||
{ok, RoomId} = room_manager:create_room("新手房", PlayerPid). | |||||
``` | |||||
### 3. 添加AI玩家 | |||||
```erlang | |||||
{ok, AiPlayer} = advanced_ai_player:start_link("AI_Player", aggressive). | |||||
``` | |||||
## 错误处理 | |||||
系统实现了基本的错误处理机制: | |||||
- 游戏异常处理 | |||||
- AI系统容错 | |||||
- 并行计算错误恢复 | |||||
- 性能监控告警 | |||||
## 维护建议 | |||||
1. 定期检查性能监控报告 | |||||
2. 更新AI模型训练数据 | |||||
3. 优化并行计算配置 | |||||
4. 备份系统数据 | |||||
@ -0,0 +1,64 @@ | |||||
%%% 游戏系统记录定义 | |||||
%%% Created: 2025-02-21 05:01:23 UTC | |||||
%%% Author: SisMaker | |||||
%% 游戏状态记录 | |||||
-record(game_state, { | |||||
players = [], % [{Pid, Cards, Role}] | |||||
current_player, % Pid | |||||
last_play = [], % {Pid, Cards} | |||||
played_cards = [], % [{Pid, Cards}] | |||||
stage = waiting, % waiting | playing | finished | |||||
landlord_cards = [] % 地主牌 | |||||
}). | |||||
%% AI状态记录 | |||||
-record(ai_state, { | |||||
strategy_model, % 策略模型 | |||||
learning_model, % 学习模型 | |||||
opponent_model, % 对手模型 | |||||
personality, % aggressive | conservative | balanced | |||||
performance_stats = [] % 性能统计 | |||||
}). | |||||
%% 学习系统状态记录 | |||||
-record(learning_state, { | |||||
neural_network, % 深度神经网络模型 | |||||
experience_buffer, % 经验回放缓冲 | |||||
model_version, % 模型版本 | |||||
training_stats % 训练统计 | |||||
}). | |||||
%% 对手模型记录 | |||||
-record(opponent_model, { | |||||
play_patterns = #{}, % 出牌模式统计 | |||||
card_preferences = #{}, % 牌型偏好 | |||||
risk_profile = 0.5, % 风险偏好 | |||||
skill_rating = 500, % 技能评分 | |||||
play_history = [] % 历史出牌记录 | |||||
}). | |||||
%% 策略状态记录 | |||||
-record(strategy_state, { | |||||
current_strategy, % 当前策略 | |||||
performance_metrics, % 性能指标 | |||||
adaptation_rate, % 适应率 | |||||
optimization_history % 优化历史 | |||||
}). | |||||
%% 游戏管理器状态记录 | |||||
-record(game_manager_state, { | |||||
game_id, % 游戏ID | |||||
players, % 玩家列表 | |||||
ai_players, % AI玩家 | |||||
current_state, % 当前游戏状态 | |||||
history % 游戏历史 | |||||
}). | |||||
%% 牌型记录 | |||||
-record(card_pattern, { | |||||
type, % single | pair | triple | straight | bomb | rocket | |||||
value, % 主牌值 | |||||
length = 1, % 顺子长度 | |||||
extra = [] % 附加牌 | |||||
}). |
@ -0,0 +1,7 @@ | |||||
{erl_opts, [debug_info]}. | |||||
{deps, []}. | |||||
{shell, [ | |||||
% {config, "config/sys.config"}, | |||||
{apps, [cardSrv]} | |||||
]}. |
@ -0,0 +1,184 @@ | |||||
-module(advanced_ai_player). | |||||
-behaviour(gen_server). | |||||
-export([start_link/2, init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2, code_change/3]). | |||||
-export([play_turn/1, get_stats/1]). | |||||
-record(state, { | |||||
name, | |||||
personality, % aggressive | conservative | balanced | adaptive | |||||
learning_model, % 机器学习模型引用 | |||||
game_history = [], | |||||
play_stats = #{}, | |||||
current_game = undefined, | |||||
adaptation_level = 0.0 | |||||
}). | |||||
%% 性格特征定义 | |||||
-define(PERSONALITY_TRAITS, #{ | |||||
aggressive => #{ | |||||
risk_tolerance => 0.8, | |||||
combo_preference => 0.7, | |||||
control_value => 0.6 | |||||
}, | |||||
conservative => #{ | |||||
risk_tolerance => 0.3, | |||||
combo_preference => 0.4, | |||||
control_value => 0.8 | |||||
}, | |||||
balanced => #{ | |||||
risk_tolerance => 0.5, | |||||
combo_preference => 0.5, | |||||
control_value => 0.5 | |||||
}, | |||||
adaptive => #{ | |||||
risk_tolerance => 0.5, | |||||
combo_preference => 0.5, | |||||
control_value => 0.5 | |||||
} | |||||
}). | |||||
%% API | |||||
start_link(Name, Personality) -> | |||||
gen_server:start_link(?MODULE, [Name, Personality], []). | |||||
play_turn(Pid) -> | |||||
gen_server:cast(Pid, play_turn). | |||||
get_stats(Pid) -> | |||||
gen_server:call(Pid, get_stats). | |||||
%% Callbacks | |||||
init([Name, Personality]) -> | |||||
{ok, LearningModel} = ml_engine:start_link(), | |||||
{ok, #state{ | |||||
name = Name, | |||||
personality = Personality, | |||||
learning_model = LearningModel | |||||
}}. | |||||
handle_cast(play_turn, State) -> | |||||
{Play, NewState} = calculate_best_move(State), | |||||
execute_play(Play, NewState), | |||||
{noreply, NewState}. | |||||
handle_call(get_stats, _From, State) -> | |||||
Stats = compile_statistics(State), | |||||
{reply, {ok, Stats}, State}. | |||||
%% 高级AI策略实现 | |||||
calculate_best_move(State) -> | |||||
GameState = analyze_game_state(State), | |||||
Personality = get_personality_traits(State), | |||||
% 获取各种可能的动作 | |||||
PossiblePlays = generate_possible_plays(State), | |||||
% 使用机器学习模型评估每个动作 | |||||
RatedPlays = evaluate_plays(PossiblePlays, GameState, State), | |||||
% 根据性格特征调整评分 | |||||
AdjustedPlays = adjust_by_personality(RatedPlays, Personality, State), | |||||
% 选择最佳动作 | |||||
BestPlay = select_best_play(AdjustedPlays, State), | |||||
% 更新状态和学习模型 | |||||
NewState = update_state_and_learn(State, BestPlay, GameState), | |||||
{BestPlay, NewState}. | |||||
analyze_game_state(State) -> | |||||
% 分析当前游戏状态 | |||||
#{ | |||||
cards_in_hand => get_cards_in_hand(State), | |||||
cards_played => get_cards_played(State), | |||||
opponent_info => analyze_opponents(State), | |||||
game_stage => determine_game_stage(State), | |||||
control_status => analyze_control(State) | |||||
}. | |||||
generate_possible_plays(State) -> | |||||
Cards = get_cards_in_hand(State), | |||||
LastPlay = get_last_play(State), | |||||
% 生成所有可能的牌组合 | |||||
AllCombinations = card_rules:generate_combinations(Cards), | |||||
% 过滤出合法的出牌选择 | |||||
ValidPlays = filter_valid_plays(AllCombinations, LastPlay), | |||||
% 添加"不出"选项 | |||||
[pass | ValidPlays]. | |||||
evaluate_plays(Plays, GameState, State) -> | |||||
lists:map( | |||||
fun(Play) -> | |||||
Score = evaluate_single_play(Play, GameState, State), | |||||
{Play, Score} | |||||
end, | |||||
Plays | |||||
). | |||||
evaluate_single_play(Play, GameState, State) -> | |||||
% 使用机器学习模型评估 | |||||
Features = extract_features(Play, GameState), | |||||
{ok, BaseScore} = ml_engine:predict(State#state.learning_model, Features), | |||||
% 考虑多个因素 | |||||
ControlScore = evaluate_control_value(Play, GameState), | |||||
TempoScore = evaluate_tempo_value(Play, GameState), | |||||
RiskScore = evaluate_risk_value(Play, GameState), | |||||
% 综合评分 | |||||
BaseScore * 0.4 + ControlScore * 0.3 + TempoScore * 0.2 + RiskScore * 0.1. | |||||
adjust_by_personality(RatedPlays, Personality, State) -> | |||||
RiskTolerance = maps:get(risk_tolerance, Personality), | |||||
ComboPreference = maps:get(combo_preference, Personality), | |||||
ControlValue = maps:get(control_value, Personality), | |||||
lists:map( | |||||
fun({Play, Score}) -> | |||||
AdjustedScore = adjust_score_by_traits(Score, Play, RiskTolerance, | |||||
ComboPreference, ControlValue, State), | |||||
{Play, AdjustedScore} | |||||
end, | |||||
RatedPlays | |||||
). | |||||
select_best_play(AdjustedPlays, State) -> | |||||
% 根据评分选择最佳动作,但加入一些随机性以避免过于可预测 | |||||
case State#state.personality of | |||||
adaptive -> | |||||
select_adaptive_play(AdjustedPlays, State); | |||||
_ -> | |||||
select_personality_based_play(AdjustedPlays, State) | |||||
end. | |||||
update_state_and_learn(State, Play, GameState) -> | |||||
% 记录动作 | |||||
NewHistory = [Play | State#state.game_history], | |||||
% 更新统计信息 | |||||
NewStats = update_play_stats(State#state.play_stats, Play), | |||||
% 如果是自适应性格,更新适应级别 | |||||
NewAdaptationLevel = case State#state.personality of | |||||
adaptive -> | |||||
update_adaptation_level(State#state.adaptation_level, Play, GameState); | |||||
_ -> | |||||
State#state.adaptation_level | |||||
end, | |||||
% 更新机器学习模型 | |||||
Features = extract_features(Play, GameState), | |||||
Reward = calculate_play_reward(Play, GameState), | |||||
ml_engine:update_model(State#state.learning_model, Features, Reward), | |||||
State#state{ | |||||
game_history = NewHistory, | |||||
play_stats = NewStats, | |||||
adaptation_level = NewAdaptationLevel | |||||
}. |
@ -0,0 +1,203 @@ | |||||
-module(advanced_ai_strategy). | |||||
-export([init_strategy/0, analyze_situation/2, make_decision/2, learn_from_game/2]). | |||||
-record(advanced_ai_state, { | |||||
strategy_model, % 策略模型 | |||||
situation_model, % 局势分析模型 | |||||
learning_model, % 学习模型 | |||||
pattern_database, % 牌型数据库 | |||||
opponent_models, % 对手模型 | |||||
game_history = [] % 游戏历史 | |||||
}). | |||||
%% 高级策略初始化 | |||||
init_strategy() -> | |||||
#advanced_ai_state{ | |||||
strategy_model = init_strategy_model(), | |||||
situation_model = init_situation_model(), | |||||
learning_model = init_learning_model(), | |||||
pattern_database = init_pattern_database(), | |||||
opponent_models = #{} | |||||
}. | |||||
%% 更复杂的局势分析 | |||||
analyze_situation(State, GameState) -> | |||||
BaseAnalysis = basic_situation_analysis(GameState), | |||||
OpponentAnalysis = analyze_opponents(State, GameState), | |||||
PatternAnalysis = analyze_card_patterns(State, GameState), | |||||
WinProbability = calculate_win_probability(State, BaseAnalysis, OpponentAnalysis), | |||||
#{ | |||||
base_analysis => BaseAnalysis, | |||||
opponent_analysis => OpponentAnalysis, | |||||
pattern_analysis => PatternAnalysis, | |||||
win_probability => WinProbability, | |||||
suggested_strategies => suggest_strategies(State, WinProbability) | |||||
}. | |||||
%% 高级决策系统 | |||||
make_decision(State, GameState) -> | |||||
% 获取当前局势分析 | |||||
SituationAnalysis = analyze_situation(State, GameState), | |||||
% 生成所有可能的行动 | |||||
PossibleActions = generate_possible_actions(GameState), | |||||
% 使用蒙特卡洛树搜索评估行动 | |||||
EvaluatedActions = monte_carlo_tree_search(State, PossibleActions, GameState), | |||||
% 应用强化学习模型 | |||||
RefinedActions = apply_reinforcement_learning(State, EvaluatedActions), | |||||
% 选择最佳行动 | |||||
select_best_action(RefinedActions, SituationAnalysis). | |||||
%% 增强学习功能 | |||||
learn_from_game(State, GameRecord) -> | |||||
% 更新对手模型 | |||||
UpdatedOpponentModels = update_opponent_models(State, GameRecord), | |||||
% 更新策略模型 | |||||
UpdatedStrategyModel = update_strategy_model(State, GameRecord), | |||||
% 更新牌型数据库 | |||||
UpdatedPatternDB = update_pattern_database(State, GameRecord), | |||||
% 应用深度学习更新 | |||||
apply_deep_learning_update(State, GameRecord), | |||||
State#advanced_ai_state{ | |||||
strategy_model = UpdatedStrategyModel, | |||||
opponent_models = UpdatedOpponentModels, | |||||
pattern_database = UpdatedPatternDB | |||||
}. | |||||
%% 内部函数 | |||||
%% 基础局势分析 | |||||
basic_situation_analysis(GameState) -> | |||||
#{ | |||||
hand_strength => evaluate_hand_strength(GameState), | |||||
control_level => evaluate_control_level(GameState), | |||||
game_stage => determine_game_stage(GameState), | |||||
remaining_key_cards => analyze_remaining_key_cards(GameState) | |||||
}. | |||||
%% 对手分析 | |||||
analyze_opponents(State, GameState) -> | |||||
OpponentModels = State#advanced_ai_state.opponent_models, | |||||
lists:map( | |||||
fun(Opponent) -> | |||||
Model = maps:get(Opponent, OpponentModels, create_new_opponent_model()), | |||||
analyze_single_opponent(Model, Opponent, GameState) | |||||
end, | |||||
get_opponents(GameState) | |||||
). | |||||
%% 牌型分析 | |||||
analyze_card_patterns(State, GameState) -> | |||||
PatternDB = State#advanced_ai_state.pattern_database, | |||||
CurrentHand = get_current_hand(GameState), | |||||
#{ | |||||
available_patterns => find_available_patterns(CurrentHand, PatternDB), | |||||
pattern_strength => evaluate_pattern_strength(CurrentHand, PatternDB), | |||||
combo_opportunities => identify_combo_opportunities(CurrentHand, PatternDB) | |||||
}. | |||||
%% 蒙特卡洛树搜索 | |||||
monte_carlo_tree_search(State, Actions, GameState) -> | |||||
MaxIterations = 1000, | |||||
lists:map( | |||||
fun(Action) -> | |||||
Score = run_mcts_simulation(State, Action, GameState, MaxIterations), | |||||
{Action, Score} | |||||
end, | |||||
Actions | |||||
). | |||||
%% MCTS模拟 | |||||
run_mcts_simulation(State, Action, GameState, MaxIterations) -> | |||||
Root = create_mcts_node(GameState, Action), | |||||
lists:foldl( | |||||
fun(_, Score) -> | |||||
SimulationResult = simulate_game(State, Root), | |||||
update_mcts_statistics(Root, SimulationResult), | |||||
calculate_ucb_score(Root) | |||||
end, | |||||
0, | |||||
lists:seq(1, MaxIterations) | |||||
). | |||||
%% 强化学习应用 | |||||
apply_reinforcement_learning(State, EvaluatedActions) -> | |||||
LearningModel = State#advanced_ai_state.learning_model, | |||||
lists:map( | |||||
fun({Action, Score}) -> | |||||
RefinedScore = apply_learning_policy(LearningModel, Action, Score), | |||||
{Action, RefinedScore} | |||||
end, | |||||
EvaluatedActions | |||||
). | |||||
%% 深度学习更新 | |||||
apply_deep_learning_update(State, GameRecord) -> | |||||
% 提取特征 | |||||
Features = extract_game_features(GameRecord), | |||||
% 准备训练数据 | |||||
TrainingData = prepare_training_data(Features, GameRecord), | |||||
% 更新模型 | |||||
update_deep_learning_model(State#advanced_ai_state.learning_model, TrainingData). | |||||
%% 策略建议生成 | |||||
suggest_strategies(State, WinProbability) -> | |||||
case WinProbability of | |||||
P when P >= 0.7 -> | |||||
[aggressive_push, maintain_control]; | |||||
P when P >= 0.4 -> | |||||
[balanced_play, seek_opportunities]; | |||||
_ -> | |||||
[defensive_play, preserve_key_cards] | |||||
end. | |||||
%% 高级模式识别 | |||||
identify_advanced_patterns(Cards, PatternDB) -> | |||||
BasePatterns = find_base_patterns(Cards), | |||||
ComplexPatterns = find_complex_patterns(Cards), | |||||
SpecialCombos = find_special_combinations(Cards, PatternDB), | |||||
#{ | |||||
base_patterns => BasePatterns, | |||||
complex_patterns => ComplexPatterns, | |||||
special_combos => SpecialCombos, | |||||
pattern_value => evaluate_pattern_combination_value(BasePatterns, ComplexPatterns, SpecialCombos) | |||||
}. | |||||
%% 对手建模 | |||||
create_new_opponent_model() -> | |||||
#{ | |||||
play_style => undefined, | |||||
pattern_preferences => #{}, | |||||
risk_tendency => 0.5, | |||||
skill_level => 0.5, | |||||
historical_plays => [] | |||||
}. | |||||
%% 更新对手模型 | |||||
update_opponent_models(State, GameRecord) -> | |||||
lists:foldl( | |||||
fun(Play, Models) -> | |||||
update_single_opponent_model(Models, Play) | |||||
end, | |||||
State#advanced_ai_state.opponent_models, | |||||
extract_plays(GameRecord) | |||||
). | |||||
%% 策略评估 | |||||
evaluate_strategy_effectiveness(Strategy, GameState) -> | |||||
ControlFactor = evaluate_control_factor(Strategy, GameState), | |||||
TempoFactor = evaluate_tempo_factor(Strategy, GameState), | |||||
RiskFactor = evaluate_risk_factor(Strategy, GameState), | |||||
(ControlFactor * 0.4) + (TempoFactor * 0.3) + (RiskFactor * 0.3). |
@ -0,0 +1,73 @@ | |||||
-module(advanced_learning). | |||||
-export([init/0, train/2, predict/2, update/3]). | |||||
-record(learning_state, { | |||||
neural_network, % 深度神经网络模型 | |||||
experience_buffer, % 经验回放缓冲 | |||||
model_version, % 模型版本 | |||||
training_stats % 训练统计 | |||||
}). | |||||
%% 神经网络配置 | |||||
-define(NETWORK_CONFIG, [ | |||||
{input_layer, 512}, | |||||
{hidden_layer_1, 256}, | |||||
{hidden_layer_2, 128}, | |||||
{hidden_layer_3, 64}, | |||||
{output_layer, 32} | |||||
]). | |||||
%% 训练参数 | |||||
-define(LEARNING_RATE, 0.001). | |||||
-define(BATCH_SIZE, 64). | |||||
-define(EXPERIENCE_BUFFER_SIZE, 10000). | |||||
init() -> | |||||
Network = initialize_neural_network(?NETWORK_CONFIG), | |||||
#learning_state{ | |||||
neural_network = Network, | |||||
experience_buffer = queue:new(), | |||||
model_version = 1, | |||||
training_stats = #{ | |||||
total_games = 0, | |||||
win_rate = 0.0, | |||||
avg_reward = 0.0 | |||||
} | |||||
}. | |||||
train(State, TrainingData) -> | |||||
% 准备批次数据 | |||||
Batch = prepare_batch(TrainingData, ?BATCH_SIZE), | |||||
% 执行深度学习训练 | |||||
{UpdatedNetwork, Loss} = train_network(State#learning_state.neural_network, Batch), | |||||
% 更新统计信息 | |||||
NewStats = update_training_stats(State#learning_state.training_stats, Loss), | |||||
% 返回更新后的状态 | |||||
State#learning_state{ | |||||
neural_network = UpdatedNetwork, | |||||
training_stats = NewStats | |||||
}. | |||||
predict(State, Input) -> | |||||
% 使用神经网络进行预测 | |||||
Features = extract_features(Input), | |||||
Prediction = neural_network:forward(State#learning_state.neural_network, Features), | |||||
process_prediction(Prediction). | |||||
update(State, Experience, Reward) -> | |||||
% 更新经验缓冲 | |||||
NewBuffer = update_experience_buffer(State#learning_state.experience_buffer, | |||||
Experience, | |||||
Reward), | |||||
% 判断是否需要进行训练 | |||||
case should_train(NewBuffer) of | |||||
true -> | |||||
TrainingData = prepare_training_data(NewBuffer), | |||||
train(State#learning_state{experience_buffer = NewBuffer}, TrainingData); | |||||
false -> | |||||
State#learning_state{experience_buffer = NewBuffer} | |||||
end. |
@ -0,0 +1,176 @@ | |||||
-module(ai_core). | |||||
-export([init_ai/1, make_decision/2, update_strategy/3]). | |||||
-record(ai_state, { | |||||
personality, % aggressive | conservative | balanced | |||||
strategy_weights, % 策略权重 | |||||
knowledge_base, % 知识库 | |||||
game_history = [] % 游戏历史 | |||||
}). | |||||
%% AI初始化 | |||||
init_ai(Personality) -> | |||||
#ai_state{ | |||||
personality = Personality, | |||||
strategy_weights = init_weights(Personality), | |||||
knowledge_base = init_knowledge_base() | |||||
}. | |||||
%% 决策制定 | |||||
make_decision(AIState, GameState) -> | |||||
% 分析当前局势 | |||||
Situation = analyze_situation(GameState), | |||||
% 生成可能的行动 | |||||
PossiblePlays = generate_possible_plays(GameState), | |||||
% 评估每个行动 | |||||
RatedPlays = evaluate_plays(PossiblePlays, AIState, Situation), | |||||
% 选择最佳行动 | |||||
select_best_play(RatedPlays, AIState). | |||||
%% 策略更新 | |||||
update_strategy(AIState, GameResult, GameHistory) -> | |||||
NewWeights = adjust_weights(AIState#ai_state.strategy_weights, GameResult), | |||||
NewKnowledge = update_knowledge(AIState#ai_state.knowledge_base, GameHistory), | |||||
AIState#ai_state{ | |||||
strategy_weights = NewWeights, | |||||
knowledge_base = NewKnowledge, | |||||
game_history = [GameHistory | AIState#ai_state.game_history] | |||||
}. | |||||
%% 内部函数 | |||||
init_weights(aggressive) -> | |||||
#{ | |||||
control_weight => 0.8, | |||||
attack_weight => 0.7, | |||||
defense_weight => 0.3, | |||||
risk_weight => 0.6 | |||||
}; | |||||
init_weights(conservative) -> | |||||
#{ | |||||
control_weight => 0.5, | |||||
attack_weight => 0.4, | |||||
defense_weight => 0.8, | |||||
risk_weight => 0.3 | |||||
}; | |||||
init_weights(balanced) -> | |||||
#{ | |||||
control_weight => 0.6, | |||||
attack_weight => 0.6, | |||||
defense_weight => 0.6, | |||||
risk_weight => 0.5 | |||||
}. | |||||
analyze_situation(GameState) -> | |||||
#{ | |||||
hand_strength => evaluate_hand_strength(GameState), | |||||
control_status => evaluate_control(GameState), | |||||
opponent_cards => estimate_opponent_cards(GameState), | |||||
game_stage => determine_game_stage(GameState) | |||||
}. | |||||
generate_possible_plays(GameState) -> | |||||
MyCards = get_my_cards(GameState), | |||||
LastPlay = get_last_play(GameState), | |||||
generate_valid_plays(MyCards, LastPlay). | |||||
evaluate_plays(Plays, AIState, Situation) -> | |||||
lists:map( | |||||
fun(Play) -> | |||||
Score = calculate_play_score(Play, AIState, Situation), | |||||
{Play, Score} | |||||
end, | |||||
Plays | |||||
). | |||||
calculate_play_score(Play, AIState, Situation) -> | |||||
Weights = AIState#ai_state.strategy_weights, | |||||
ControlScore = evaluate_control_value(Play, Situation) * | |||||
maps:get(control_weight, Weights), | |||||
AttackScore = evaluate_attack_value(Play, Situation) * | |||||
maps:get(attack_weight, Weights), | |||||
DefenseScore = evaluate_defense_value(Play, Situation) * | |||||
maps:get(defense_weight, Weights), | |||||
RiskScore = evaluate_risk_value(Play, Situation) * | |||||
maps:get(risk_weight, Weights), | |||||
ControlScore + AttackScore + DefenseScore + RiskScore. | |||||
select_best_play(RatedPlays, AIState) -> | |||||
case AIState#ai_state.personality of | |||||
aggressive -> | |||||
select_aggressive(RatedPlays); | |||||
conservative -> | |||||
select_conservative(RatedPlays); | |||||
balanced -> | |||||
select_balanced(RatedPlays) | |||||
end. | |||||
%% 策略选择函数 | |||||
select_aggressive(RatedPlays) -> | |||||
% 倾向于选择得分最高的行动 | |||||
{Play, _Score} = lists:max(RatedPlays), | |||||
Play. | |||||
select_conservative(RatedPlays) -> | |||||
% 倾向于选择风险较低的行动 | |||||
SafePlays = filter_safe_plays(RatedPlays), | |||||
case SafePlays of | |||||
[] -> select_balanced(RatedPlays); | |||||
_ -> select_from_safe_plays(SafePlays) | |||||
end. | |||||
select_balanced(RatedPlays) -> | |||||
% 在得分和风险之间寻找平衡 | |||||
{Play, _Score} = select_balanced_play(RatedPlays), | |||||
Play. | |||||
%% 评估函数 | |||||
evaluate_hand_strength(GameState) -> | |||||
Cards = get_my_cards(GameState), | |||||
calculate_hand_value(Cards). | |||||
evaluate_control(GameState) -> | |||||
% 评估是否控制局势 | |||||
LastPlay = get_last_play(GameState), | |||||
MyCards = get_my_cards(GameState), | |||||
can_control_game(MyCards, LastPlay). | |||||
estimate_opponent_cards(GameState) -> | |||||
% 基于已出牌情况估计对手手牌 | |||||
PlayedCards = get_played_cards(GameState), | |||||
MyCards = get_my_cards(GameState), | |||||
estimate_remaining_cards(PlayedCards, MyCards). | |||||
%% 知识库更新 | |||||
update_knowledge(KnowledgeBase, GameHistory) -> | |||||
% 更新AI的知识库 | |||||
NewPatterns = extract_patterns(GameHistory), | |||||
merge_knowledge(KnowledgeBase, NewPatterns). | |||||
extract_patterns(GameHistory) -> | |||||
% 从游戏历史中提取出牌模式 | |||||
lists:foldl( | |||||
fun(Play, Patterns) -> | |||||
Pattern = analyze_play_pattern(Play), | |||||
update_pattern_stats(Pattern, Patterns) | |||||
end, | |||||
#{}, | |||||
GameHistory | |||||
). | |||||
merge_knowledge(Old, New) -> | |||||
maps:merge_with( | |||||
fun(_Key, OldValue, NewValue) -> | |||||
update_knowledge_value(OldValue, NewValue) | |||||
end, | |||||
Old, | |||||
New | |||||
). |
@ -0,0 +1,29 @@ | |||||
-module(ai_optimizer). | |||||
-export([optimize_ai_system/2, tune_parameters/2]). | |||||
optimize_ai_system(AIState, Metrics) -> | |||||
% 分析性能指标 | |||||
PerformanceAnalysis = analyze_performance_metrics(Metrics), | |||||
% 优化决策系统 | |||||
OptimizedDecisionSystem = optimize_decision_system(AIState, PerformanceAnalysis), | |||||
% 优化学习系统 | |||||
OptimizedLearningSystem = optimize_learning_system(AIState, PerformanceAnalysis), | |||||
% 更新AI状态 | |||||
AIState#ai_state{ | |||||
decision_system = OptimizedDecisionSystem, | |||||
learning_system = OptimizedLearningSystem | |||||
}. | |||||
tune_parameters(Parameters, Performance) -> | |||||
% 参数优化逻辑 | |||||
OptimizedParams = lists:map( | |||||
fun({Param, Value}) -> | |||||
NewValue = adjust_parameter(Param, Value, Performance), | |||||
{Param, NewValue} | |||||
end, | |||||
Parameters | |||||
), | |||||
maps:from_list(OptimizedParams). |
@ -0,0 +1,158 @@ | |||||
-module(ai_player). | |||||
-behaviour(gen_server). | |||||
-export([start_link/1, init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2, code_change/3]). | |||||
-export([create_ai_player/1, play_turn/1]). | |||||
-record(state, { | |||||
name, | |||||
game_pid, | |||||
cards = [], | |||||
difficulty = normal, % easy | normal | hard | |||||
strategy = undefined, % landlord | farmer | |||||
last_play = [] | |||||
}). | |||||
%% AI难度设置 | |||||
-define(EASY_THINK_TIME, 1000). % 1秒 | |||||
-define(NORMAL_THINK_TIME, 500). % 0.5秒 | |||||
-define(HARD_THINK_TIME, 200). % 0.2秒 | |||||
%% API | |||||
start_link(Name) -> | |||||
gen_server:start_link(?MODULE, [Name], []). | |||||
create_ai_player(Difficulty) -> | |||||
Name = generate_ai_name(), | |||||
{ok, Pid} = start_link(Name), | |||||
gen_server:cast(Pid, {set_difficulty, Difficulty}), | |||||
{ok, Pid}. | |||||
play_turn(AiPid) -> | |||||
gen_server:cast(AiPid, play_turn). | |||||
%% Callbacks | |||||
init([Name]) -> | |||||
{ok, #state{name = Name}}. | |||||
handle_call(get_name, _From, State) -> | |||||
{reply, {ok, State#state.name}, State}; | |||||
handle_call(_Request, _From, State) -> | |||||
{reply, {error, unknown_call}, State}. | |||||
handle_cast({set_difficulty, Difficulty}, State) -> | |||||
{noreply, State#state{difficulty = Difficulty}}; | |||||
handle_cast({update_cards, Cards}, State) -> | |||||
{noreply, State#state{cards = Cards}}; | |||||
handle_cast({set_strategy, Strategy}, State) -> | |||||
{noreply, State#state{strategy = Strategy}}; | |||||
handle_cast(play_turn, State) -> | |||||
timer:sleep(get_think_time(State#state.difficulty)), | |||||
{Play, NewState} = calculate_best_play(State), | |||||
case Play of | |||||
pass -> | |||||
game_server:pass(State#state.game_pid, self()); | |||||
Cards -> | |||||
game_server:play_cards(State#state.game_pid, self(), Cards) | |||||
end, | |||||
{noreply, NewState}; | |||||
handle_cast(_Msg, State) -> | |||||
{noreply, State}. | |||||
handle_info(_Info, State) -> | |||||
{noreply, State}. | |||||
%% 内部函数 | |||||
% 生成AI玩家名称 | |||||
generate_ai_name() -> | |||||
Names = ["AlphaBot", "DeepPlayer", "SmartAI", "MasterBot", "ProBot"], | |||||
RandomName = lists:nth(rand:uniform(length(Names)), Names), | |||||
RandomNum = integer_to_list(rand:uniform(999)), | |||||
RandomName ++ "_" ++ RandomNum. | |||||
% 获取思考时间 | |||||
get_think_time(easy) -> ?EASY_THINK_TIME; | |||||
get_think_time(normal) -> ?NORMAL_THINK_TIME; | |||||
get_think_time(hard) -> ?HARD_THINK_TIME. | |||||
% 计算最佳出牌 | |||||
calculate_best_play(State = #state{cards = Cards, strategy = Strategy, last_play = LastPlay}) -> | |||||
case Strategy of | |||||
landlord -> calculate_landlord_play(Cards, LastPlay, State); | |||||
farmer -> calculate_farmer_play(Cards, LastPlay, State) | |||||
end. | |||||
% 地主策略 | |||||
calculate_landlord_play(Cards, LastPlay, State) -> | |||||
case should_play_big(Cards, LastPlay, State) of | |||||
true -> | |||||
{get_biggest_play(Cards, LastPlay), State}; | |||||
false -> | |||||
{get_optimal_play(Cards, LastPlay), State} | |||||
end. | |||||
% 农民策略 | |||||
calculate_farmer_play(Cards, LastPlay, State) -> | |||||
case should_save_cards(Cards, LastPlay, State) of | |||||
true -> | |||||
{pass, State}; | |||||
false -> | |||||
{get_optimal_play(Cards, LastPlay), State} | |||||
end. | |||||
% 判断是否应该出大牌 | |||||
should_play_big(Cards, LastPlay, _State) -> | |||||
RemainingCount = length(Cards), | |||||
case RemainingCount of | |||||
1 -> true; | |||||
2 -> true; | |||||
_ -> | |||||
has_winning_chance(Cards, LastPlay) | |||||
end. | |||||
% 判断是否应该保留手牌 | |||||
should_save_cards(Cards, LastPlay, _State) -> | |||||
case LastPlay of | |||||
[] -> false; | |||||
_ -> | |||||
RemainingCount = length(Cards), | |||||
OtherPlayerCards = estimate_other_players_cards(), | |||||
RemainingCount < 5 andalso OtherPlayerCards > RemainingCount * 2 | |||||
end. | |||||
% 估算其他玩家的牌数 | |||||
estimate_other_players_cards() -> | |||||
15. % 这是一个简化的估算,实际实现需要根据游戏进程动态计算 | |||||
% 检查是否有获胜机会 | |||||
has_winning_chance(Cards, _LastPlay) -> | |||||
% 简化版本的获胜概率计算 | |||||
length(Cards) =< 4. | |||||
% 获取最大的可出牌组合 | |||||
get_biggest_play(Cards, LastPlay) -> | |||||
% 实现查找最大牌型的逻辑 | |||||
% 这里需要调用 card_rules 模块来判断牌型 | |||||
case LastPlay of | |||||
[] -> | |||||
find_biggest_combination(Cards); | |||||
_ -> | |||||
find_bigger_combination(Cards, LastPlay) | |||||
end. | |||||
% 获取最优的出牌组合 | |||||
get_optimal_play(Cards, LastPlay) -> | |||||
% 实现最优出牌策略 | |||||
% 考虑因素:手牌数量、牌型组合、出牌节奏等 | |||||
case LastPlay of | |||||
[] -> | |||||
find_optimal_combination(Cards); | |||||
_ -> | |||||
find_minimum_bigger_combination(Cards, LastPlay) | |||||
end. |
@ -0,0 +1,160 @@ | |||||
-module(ai_strategy). | |||||
-export([initialize_strategy/0, update_strategy/2, make_decision/2, | |||||
analyze_game_state/1, evaluate_play/3]). | |||||
-record(game_state, { | |||||
hand_cards, % 手牌 | |||||
played_cards = [], % 已出牌 | |||||
player_position, % 玩家位置(地主/农民) | |||||
remaining_cards, % 剩余牌数 | |||||
stage, % 游戏阶段 | |||||
last_play, % 上家出牌 | |||||
control_status % 是否控制局面 | |||||
}). | |||||
%% 策略权重配置 | |||||
-define(STRATEGY_WEIGHTS, #{ | |||||
early_game => #{ | |||||
control_weight => 0.7, | |||||
combo_weight => 0.5, | |||||
defensive_weight => 0.3, | |||||
risk_weight => 0.4 | |||||
}, | |||||
mid_game => #{ | |||||
control_weight => 0.5, | |||||
combo_weight => 0.6, | |||||
defensive_weight => 0.5, | |||||
risk_weight => 0.5 | |||||
}, | |||||
late_game => #{ | |||||
control_weight => 0.3, | |||||
combo_weight => 0.8, | |||||
defensive_weight => 0.7, | |||||
risk_weight => 0.6 | |||||
} | |||||
}). | |||||
%% 初始化策略系统 | |||||
initialize_strategy() -> | |||||
#{ | |||||
weights => ?STRATEGY_WEIGHTS, | |||||
learning_rate => 0.01, | |||||
adaptation_rate => 0.1, | |||||
experience => #{}, | |||||
history => [] | |||||
}. | |||||
%% 更新策略 | |||||
update_strategy(Strategy, GameState) -> | |||||
Stage = determine_game_stage(GameState), | |||||
UpdatedWeights = adjust_weights(Strategy, Stage, GameState), | |||||
NewExperience = update_experience(Strategy, GameState), | |||||
Strategy#{ | |||||
weights => UpdatedWeights, | |||||
experience => NewExperience, | |||||
history => [GameState | maps:get(history, Strategy)] | |||||
}. | |||||
%% 做出决策 | |||||
make_decision(Strategy, GameState) -> | |||||
PossiblePlays = generate_possible_plays(GameState#game_state.hand_cards), | |||||
EvaluatedPlays = evaluate_all_plays(PossiblePlays, Strategy, GameState), | |||||
select_best_play(EvaluatedPlays, Strategy, GameState). | |||||
%% 评估所有可能的出牌 | |||||
evaluate_all_plays(Plays, Strategy, GameState) -> | |||||
lists:map( | |||||
fun(Play) -> | |||||
Score = evaluate_play(Play, Strategy, GameState), | |||||
{Play, Score} | |||||
end, | |||||
Plays | |||||
). | |||||
%% 评估单个出牌 | |||||
evaluate_play(Play, Strategy, GameState) -> | |||||
Weights = get_stage_weights(Strategy, GameState#game_state.stage), | |||||
ControlScore = evaluate_control(Play, GameState) * maps:get(control_weight, Weights), | |||||
ComboScore = evaluate_combo(Play, GameState) * maps:get(combo_weight, Weights), | |||||
DefensiveScore = evaluate_defensive(Play, GameState) * maps:get(defensive_weight, Weights), | |||||
RiskScore = evaluate_risk(Play, GameState) * maps:get(risk_weight, Weights), | |||||
ControlScore + ComboScore + DefensiveScore + RiskScore. | |||||
%% 选择最佳出牌 | |||||
select_best_play(EvaluatedPlays, Strategy, GameState) -> | |||||
case should_apply_randomness(Strategy, GameState) of | |||||
true -> | |||||
apply_randomness(EvaluatedPlays); | |||||
false -> | |||||
{Play, _Score} = lists:max(EvaluatedPlays), | |||||
Play | |||||
end. | |||||
%% 内部辅助函数 | |||||
determine_game_stage(#game_state{remaining_cards = Remaining}) -> | |||||
case Remaining of | |||||
N when N > 15 -> early_game; | |||||
N when N > 8 -> mid_game; | |||||
_ -> late_game | |||||
end. | |||||
adjust_weights(Strategy, Stage, GameState) -> | |||||
CurrentWeights = maps:get(Stage, maps:get(weights, Strategy)), | |||||
AdaptationRate = maps:get(adaptation_rate, Strategy), | |||||
% 基于游戏状态调整权重 | |||||
adjust_weight_based_on_state(CurrentWeights, GameState, AdaptationRate). | |||||
update_experience(Strategy, GameState) -> | |||||
Experience = maps:get(experience, Strategy), | |||||
GamePattern = extract_game_pattern(GameState), | |||||
maps:update_with( | |||||
GamePattern, | |||||
fun(Count) -> Count + 1 end, | |||||
1, | |||||
Experience | |||||
). | |||||
evaluate_control(Play, GameState) -> | |||||
case Play of | |||||
pass -> 0.0; | |||||
_ -> | |||||
RemainingControl = calculate_remaining_control(GameState), | |||||
PlayStrength = game_logic:calculate_card_value(Play), | |||||
RemainingControl * PlayStrength / 100.0 | |||||
end. | |||||
evaluate_combo(Play, GameState) -> | |||||
RemainingCombos = count_remaining_combos(GameState#game_state.hand_cards -- Play), | |||||
case RemainingCombos of | |||||
0 -> 1.0; | |||||
_ -> 0.8 * (1 - 1/RemainingCombos) | |||||
end. | |||||
evaluate_defensive(Play, GameState) -> | |||||
case GameState#game_state.player_position of | |||||
farmer -> | |||||
evaluate_farmer_defensive(Play, GameState); | |||||
landlord -> | |||||
evaluate_landlord_defensive(Play, GameState) | |||||
end. | |||||
evaluate_risk(Play, GameState) -> | |||||
case is_risky_play(Play, GameState) of | |||||
true -> 0.3; | |||||
false -> 1.0 | |||||
end. | |||||
should_apply_randomness(Strategy, GameState) -> | |||||
ExperienceCount = maps:size(maps:get(experience, Strategy)), | |||||
ExperienceCount < 1000 orelse is_close_game(GameState). | |||||
apply_randomness(EvaluatedPlays) -> | |||||
RandomFactor = 0.1, | |||||
Plays = [{Play, Score + (rand:uniform() * RandomFactor)} || {Play, Score} <- EvaluatedPlays], | |||||
{SelectedPlay, _} = lists:max(Plays), | |||||
SelectedPlay. |
@ -0,0 +1,58 @@ | |||||
-module(ai_test). | |||||
-export([run_test/0]). | |||||
run_test() -> | |||||
% 启动所有必要的服务 | |||||
{ok, DL} = deep_learning:start_link(), | |||||
{ok, PC} = parallel_compute:start_link(), | |||||
{ok, PM} = performance_monitor:start_link(), | |||||
{ok, VS} = visualization:start_link(), | |||||
% 创建测试数据 | |||||
TestData = create_test_data(), | |||||
% 训练网络 | |||||
{ok, Network} = deep_learning:train_network(test_network, TestData), | |||||
% 进行预测 | |||||
TestInput = prepare_test_input(), | |||||
{ok, Prediction} = deep_learning:predict(test_network, TestInput), | |||||
% 监控性能 | |||||
{ok, MonitorId} = performance_monitor:start_monitoring(test_network), | |||||
% 等待一些时间收集数据 | |||||
timer:sleep(5000), | |||||
% 获取性能数据 | |||||
{ok, Metrics} = performance_monitor:get_metrics(MonitorId), | |||||
% 创建可视化 | |||||
{ok, ChartId} = visualization:create_chart(line_chart, Metrics), | |||||
% 导出结果 | |||||
{ok, Report} = performance_monitor:generate_report(MonitorId), | |||||
{ok, Chart} = visualization:export_chart(ChartId, png), | |||||
% 清理资源 | |||||
ok = performance_monitor:stop_monitoring(MonitorId), | |||||
% 返回测试结果 | |||||
#{ | |||||
prediction => Prediction, | |||||
metrics => Metrics, | |||||
report => Report, | |||||
chart => Chart | |||||
}. | |||||
% 辅助函数 | |||||
create_test_data() -> | |||||
[ | |||||
{[1,2,3], [4]}, | |||||
{[2,3,4], [5]}, | |||||
{[3,4,5], [6]}, | |||||
{[4,5,6], [7]} | |||||
]. | |||||
prepare_test_input() -> | |||||
[5,6,7]. |
@ -0,0 +1,106 @@ | |||||
-module(auto_player). | |||||
-behaviour(gen_server). | |||||
-export([start_link/0, init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2, code_change/3]). | |||||
-export([take_over/1, release/1, is_auto_playing/1]). | |||||
-record(state, { | |||||
controlled_players = #{}, % Map: PlayerPid -> OriginalPlayer | |||||
active_games = #{} % Map: GamePid -> [{PlayerPid, Cards}] | |||||
}). | |||||
%% API | |||||
start_link() -> | |||||
gen_server:start_link({local, ?MODULE}, ?MODULE, [], []). | |||||
take_over(PlayerPid) -> | |||||
gen_server:call(?MODULE, {take_over, PlayerPid}). | |||||
release(PlayerPid) -> | |||||
gen_server:call(?MODULE, {release, PlayerPid}). | |||||
is_auto_playing(PlayerPid) -> | |||||
gen_server:call(?MODULE, {is_auto_playing, PlayerPid}). | |||||
%% Callbacks | |||||
init([]) -> | |||||
{ok, #state{}}. | |||||
handle_call({take_over, PlayerPid}, _From, State = #state{controlled_players = Players}) -> | |||||
case maps:is_key(PlayerPid, Players) of | |||||
true -> | |||||
{reply, {error, already_controlled}, State}; | |||||
false -> | |||||
NewPlayers = maps:put(PlayerPid, {auto, os:timestamp()}, Players), | |||||
{reply, ok, State#state{controlled_players = NewPlayers}} | |||||
end; | |||||
handle_call({release, PlayerPid}, _From, State = #state{controlled_players = Players}) -> | |||||
NewPlayers = maps:remove(PlayerPid, Players), | |||||
{reply, ok, State#state{controlled_players = NewPlayers}}; | |||||
handle_call({is_auto_playing, PlayerPid}, _From, State = #state{controlled_players = Players}) -> | |||||
{reply, maps:is_key(PlayerPid, Players), State}; | |||||
handle_call(_Request, _From, State) -> | |||||
{reply, {error, unknown_call}, State}. | |||||
handle_cast({game_update, GamePid, PlayerPid, Cards}, State) -> | |||||
NewState = update_game_state(GamePid, PlayerPid, Cards, State), | |||||
maybe_play_turn(GamePid, PlayerPid, NewState), | |||||
{noreply, NewState}; | |||||
handle_cast(_Msg, State) -> | |||||
{noreply, State}. | |||||
handle_info(_Info, State) -> | |||||
{noreply, State}. | |||||
%% 内部函数 | |||||
update_game_state(GamePid, PlayerPid, Cards, State = #state{active_games = Games}) -> | |||||
GamePlayers = maps:get(GamePid, Games, []), | |||||
UpdatedPlayers = lists:keystore(PlayerPid, 1, GamePlayers, {PlayerPid, Cards}), | |||||
NewGames = maps:put(GamePid, UpdatedPlayers, Games), | |||||
State#state{active_games = NewGames}. | |||||
maybe_play_turn(GamePid, PlayerPid, State) -> | |||||
case should_play_turn(GamePid, PlayerPid, State) of | |||||
true -> | |||||
timer:sleep(rand:uniform(1000) + 500), % 添加随机延迟使行为更自然 | |||||
Play = calculate_auto_play(GamePid, PlayerPid, State), | |||||
execute_play(GamePid, PlayerPid, Play); | |||||
false -> | |||||
ok | |||||
end. | |||||
should_play_turn(GamePid, PlayerPid, #state{controlled_players = Players}) -> | |||||
maps:is_key(PlayerPid, Players) andalso is_current_player(GamePid, PlayerPid). | |||||
is_current_player(GamePid, PlayerPid) -> | |||||
% 从游戏服务器获取当前玩家 | |||||
case game_server:get_current_player(GamePid) of | |||||
{ok, CurrentPlayer} -> CurrentPlayer =:= PlayerPid; | |||||
_ -> false | |||||
end. | |||||
calculate_auto_play(GamePid, PlayerPid, #state{active_games = Games}) -> | |||||
GameState = maps:get(GamePid, Games, []), | |||||
{_, Cards} = lists:keyfind(PlayerPid, 1, GameState), | |||||
LastPlay = game_server:get_last_play(GamePid), | |||||
case can_beat_play(Cards, LastPlay) of | |||||
{true, Play} -> {play, Play}; | |||||
false -> pass | |||||
end. | |||||
execute_play(_GamePid, _PlayerPid, pass) -> | |||||
game_server:pass(_GamePid, _PlayerPid); | |||||
execute_play(_GamePid, _PlayerPid, {play, Cards}) -> | |||||
game_server:play_cards(_GamePid, _PlayerPid, Cards). | |||||
can_beat_play(Cards, LastPlay) -> | |||||
% 使用 card_rules 模块来找出可以压过上家的牌 | |||||
case card_rules:find_valid_plays(Cards, LastPlay) of | |||||
[] -> false; | |||||
[BestPlay|_] -> {true, BestPlay} | |||||
end. |
@ -0,0 +1,15 @@ | |||||
{application, cardSrv, | |||||
[{description, "An OTP application"}, | |||||
{vsn, "0.1.0"}, | |||||
{registered, []}, | |||||
{mod, {cardSrv_app, []}}, | |||||
{applications, | |||||
[kernel, | |||||
stdlib | |||||
]}, | |||||
{env,[]}, | |||||
{modules, []}, | |||||
{licenses, ["Apache-2.0"]}, | |||||
{links, []} | |||||
]}. |
@ -0,0 +1,18 @@ | |||||
%%%------------------------------------------------------------------- | |||||
%% @doc cardSrv public API | |||||
%% @end | |||||
%%%------------------------------------------------------------------- | |||||
-module(cardSrv_app). | |||||
-behaviour(application). | |||||
-export([start/2, stop/1]). | |||||
start(_StartType, _StartArgs) -> | |||||
cardSrv_sup:start_link(). | |||||
stop(_State) -> | |||||
ok. | |||||
%% internal functions |
@ -0,0 +1,35 @@ | |||||
%%%------------------------------------------------------------------- | |||||
%% @doc cardSrv top level supervisor. | |||||
%% @end | |||||
%%%------------------------------------------------------------------- | |||||
-module(cardSrv_sup). | |||||
-behaviour(supervisor). | |||||
-export([start_link/0]). | |||||
-export([init/1]). | |||||
-define(SERVER, ?MODULE). | |||||
start_link() -> | |||||
supervisor:start_link({local, ?SERVER}, ?MODULE, []). | |||||
%% sup_flags() = #{strategy => strategy(), % optional | |||||
%% intensity => non_neg_integer(), % optional | |||||
%% period => pos_integer()} % optional | |||||
%% child_spec() = #{id => child_id(), % mandatory | |||||
%% start => mfargs(), % mandatory | |||||
%% restart => restart(), % optional | |||||
%% shutdown => shutdown(), % optional | |||||
%% type => worker(), % optional | |||||
%% modules => modules()} % optional | |||||
init([]) -> | |||||
SupFlags = #{strategy => one_for_all, | |||||
intensity => 0, | |||||
period => 1}, | |||||
ChildSpecs = [], | |||||
{ok, {SupFlags, ChildSpecs}}. | |||||
%% internal functions |
@ -0,0 +1,93 @@ | |||||
-module(card_rules). | |||||
-export([validate_play/2, get_card_type/1, compare_cards/2]). | |||||
%% 验证出牌是否合法 | |||||
validate_play(Cards, LastPlay) -> | |||||
case {get_card_type(Cards), get_card_type(LastPlay)} of | |||||
{invalid, _} -> false; | |||||
{_, undefined} -> true; % 首次出牌 | |||||
{Type, Type} -> compare_cards(Cards, LastPlay); | |||||
{bomb, _} -> true; | |||||
{rocket, _} -> true; | |||||
_ -> false | |||||
end. | |||||
%% 获取牌型 | |||||
get_card_type(Cards) -> | |||||
case length(Cards) of | |||||
0 -> undefined; | |||||
1 -> single; | |||||
2 -> check_pair(Cards); | |||||
3 -> check_three(Cards); | |||||
4 -> check_bomb_or_three_one(Cards); | |||||
_ -> check_sequence(Cards) | |||||
end. | |||||
%% 检查对子 | |||||
check_pair([{_, N}, {_, N}]) -> pair; | |||||
check_pair(_) -> invalid. | |||||
%% 检查三张 | |||||
check_three([{_, N}, {_, N}, {_, N}]) -> three; | |||||
check_three(_) -> invalid. | |||||
%% 检查炸弹或三带一 | |||||
check_bomb_or_three_one(Cards) -> | |||||
Grouped = group_cards(Cards), | |||||
case Grouped of | |||||
[{_, 4}|_] -> bomb; | |||||
[{_, 3}, {_, 1}] -> three_one; | |||||
_ -> invalid | |||||
end. | |||||
%% 检查顺子等 | |||||
check_sequence(Cards) -> | |||||
case is_straight(Cards) of | |||||
true -> straight; | |||||
false -> check_other_types(Cards) | |||||
end. | |||||
%% 比较牌大小 | |||||
compare_cards(Cards1, Cards2) -> | |||||
case {get_card_type(Cards1), get_card_type(Cards2)} of | |||||
{rocket, _} -> true; | |||||
{bomb, OtherType} when OtherType =/= bomb -> true; | |||||
{Same, Same} -> compare_value(Cards1, Cards2); | |||||
_ -> false | |||||
end. | |||||
%% 辅助函数 - 对牌进行分组 | |||||
group_cards(Cards) -> | |||||
Dict = lists:foldl( | |||||
fun({_, N}, Acc) -> | |||||
dict:update_counter(N, 1, Acc) | |||||
end, | |||||
dict:new(), | |||||
Cards | |||||
), | |||||
lists:sort( | |||||
fun({_, Count1}, {_, Count2}) -> Count1 >= Count2 end, | |||||
dict:to_list(Dict) | |||||
). | |||||
%% 辅助函数 - 判断是否为顺子 | |||||
is_straight(Cards) -> | |||||
CardValues = [{"3",3}, {"4",4}, {"5",5}, {"6",6}, {"7",7}, {"8",8}, {"9",9}, | |||||
{"10",10}, {"J",11}, {"Q",12}, {"K",13}, {"A",14}, {"2",15}], | |||||
Values = [proplists:get_value(N, CardValues) || {_, N} <- Cards], | |||||
SortedValues = lists:sort(Values), | |||||
case length(SortedValues) >= 5 of | |||||
true -> | |||||
lists:all(fun({A, B}) -> B - A =:= 1 end, | |||||
lists:zip(SortedValues, tl(SortedValues))); | |||||
false -> false | |||||
end. | |||||
%% 比较相同牌型的大小 | |||||
compare_value(Cards1, Cards2) -> | |||||
CardValues = [{"3",3}, {"4",4}, {"5",5}, {"6",6}, {"7",7}, {"8",8}, {"9",9}, | |||||
{"10",10}, {"J",11}, {"Q",12}, {"K",13}, {"A",14}, {"2",15}, | |||||
{"小王",16}, {"大王",17}], | |||||
Max1 = lists:max([proplists:get_value(N, CardValues) || {_, N} <- Cards1]), | |||||
Max2 = lists:max([proplists:get_value(N, CardValues) || {_, N} <- Cards2]), | |||||
Max1 > Max2. |
@ -0,0 +1,34 @@ | |||||
-module(cards). | |||||
-export([init_cards/0, shuffle_cards/1, deal_cards/1, sort_cards/1]). | |||||
%% 初始化一副牌 | |||||
init_cards() -> | |||||
Colors = ["♠", "♥", "♣", "♦"], | |||||
Numbers = ["3","4","5","6","7","8","9","10","J","Q","K","A","2"], | |||||
Cards = [{Color, Number} || Color <- Colors, Number <- Numbers], | |||||
[{joker, "小王"}, {joker, "大王"}] ++ Cards. | |||||
%% 洗牌 | |||||
shuffle_cards(Cards) -> | |||||
List = [{rand:uniform(), Card} || Card <- Cards], | |||||
[Card || {_, Card} <- lists:sort(List)]. | |||||
%% 发牌 - 返回{Player1Cards, Player2Cards, Player3Cards, LandlordCards} | |||||
deal_cards(Cards) -> | |||||
{First17, Rest} = lists:split(17, Cards), | |||||
{Second17, Rest2} = lists:split(17, Rest), | |||||
{Third17, LandlordCards} = lists:split(17, Rest2), | |||||
{First17, Second17, Third17, LandlordCards}. | |||||
%% 排序牌 - 根据大小排序 | |||||
sort_cards(Cards) -> | |||||
CardValues = [{"3",3}, {"4",4}, {"5",5}, {"6",6}, {"7",7}, {"8",8}, {"9",9}, | |||||
{"10",10}, {"J",11}, {"Q",12}, {"K",13}, {"A",14}, {"2",15}, | |||||
{"小王",16}, {"大王",17}], | |||||
lists:sort( | |||||
fun({_, N1}, {_, N2}) -> | |||||
{_, V1} = lists:keyfind(N1, 1, CardValues), | |||||
{_, V2} = lists:keyfind(N2, 1, CardValues), | |||||
V1 =< V2 | |||||
end, | |||||
Cards). |
@ -0,0 +1,53 @@ | |||||
-module(decision_engine). | |||||
-export([make_decision/3, evaluate_options/2, calculate_win_probability/2]). | |||||
make_decision(GameState, AIState, Options) -> | |||||
% 深度评估每个选项 | |||||
EvaluatedOptions = deep_evaluate_options(Options, GameState, AIState), | |||||
% 计算胜率 | |||||
WinProbabilities = calculate_win_probabilities(EvaluatedOptions, GameState), | |||||
% 风险评估 | |||||
RiskAnalysis = analyze_risks(EvaluatedOptions, GameState), | |||||
% 综合决策 | |||||
BestOption = select_optimal_option(EvaluatedOptions, WinProbabilities, RiskAnalysis), | |||||
% 应用最终决策 | |||||
apply_decision(BestOption, GameState, AIState). | |||||
deep_evaluate_options(Options, GameState, AIState) -> | |||||
lists:map( | |||||
fun(Option) -> | |||||
% 深度搜索评估 | |||||
SearchResult = monte_carlo_search(Option, GameState, 1000), | |||||
% 策略评估 | |||||
StrategyScore = strategy_optimizer:evaluate_strategy(Option, GameState), | |||||
% 对手反应预测 | |||||
OpponentReaction = opponent_modeling:predict_play(AIState#ai_state.opponent_model, GameState), | |||||
% 综合评分 | |||||
{Option, calculate_comprehensive_score(SearchResult, StrategyScore, OpponentReaction)} | |||||
end, | |||||
Options | |||||
). | |||||
calculate_win_probability(Option, GameState) -> | |||||
% 基于当前局势分析 | |||||
SituationScore = analyze_situation_score(GameState), | |||||
% 基于牌型分析 | |||||
PatternScore = analyze_pattern_strength(Option), | |||||
% 基于对手模型 | |||||
OpponentScore = analyze_opponent_factors(GameState), | |||||
% 计算综合胜率 | |||||
calculate_combined_probability([ | |||||
{SituationScore, 0.4}, | |||||
{PatternScore, 0.3}, | |||||
{OpponentScore, 0.3} | |||||
]). |
@ -0,0 +1,104 @@ | |||||
-module(deep_learning). | |||||
-behaviour(gen_server). | |||||
-export([start_link/0, init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2, code_change/3]). | |||||
-export([train_network/2, predict/2, update_network/2, get_network_stats/1]). | |||||
-record(state, { | |||||
networks = #{}, % Map: NetworkName -> NetworkData | |||||
training_queue = [], % 训练队列 | |||||
batch_size = 32, % 批次大小 | |||||
learning_rate = 0.001 % 学习率 | |||||
}). | |||||
-record(network, { | |||||
layers = [], % 网络层结构 | |||||
weights = #{}, % 权重 | |||||
biases = #{}, % 偏置 | |||||
activation = relu, % 激活函数 | |||||
optimizer = adam, % 优化器 | |||||
loss_history = [], % 损失历史 | |||||
accuracy_history = [] % 准确率历史 | |||||
}). | |||||
%% API | |||||
start_link() -> | |||||
gen_server:start_link({local, ?MODULE}, ?MODULE, [], []). | |||||
train_network(NetworkName, TrainingData) -> | |||||
gen_server:call(?MODULE, {train, NetworkName, TrainingData}). | |||||
predict(NetworkName, Input) -> | |||||
gen_server:call(?MODULE, {predict, NetworkName, Input}). | |||||
update_network(NetworkName, Gradients) -> | |||||
gen_server:cast(?MODULE, {update, NetworkName, Gradients}). | |||||
get_network_stats(NetworkName) -> | |||||
gen_server:call(?MODULE, {get_stats, NetworkName}). | |||||
%% 内部函数 | |||||
initialize_network(LayerSizes) -> | |||||
Layers = create_layers(LayerSizes), | |||||
Weights = initialize_weights(Layers), | |||||
Biases = initialize_biases(Layers), | |||||
#network{ | |||||
layers = Layers, | |||||
weights = Weights, | |||||
biases = Biases | |||||
}. | |||||
create_layers(Sizes) -> | |||||
lists:map(fun(Size) -> | |||||
#{size => Size, type => dense} | |||||
end, Sizes). | |||||
initialize_weights(Layers) -> | |||||
lists:foldl( | |||||
fun(Layer, Acc) -> | |||||
Size = maps:get(size, Layer), | |||||
W = random_matrix(Size, Size), | |||||
maps:put(Size, W, Acc) | |||||
end, | |||||
#{}, | |||||
Layers | |||||
). | |||||
random_matrix(Rows, Cols) -> | |||||
matrix:new(Rows, Cols, fun() -> rand:uniform() - 0.5 end). | |||||
forward_propagation(Input, Network) -> | |||||
#network{layers = Layers, weights = Weights, biases = Biases} = Network, | |||||
lists:foldl( | |||||
fun(Layer, Acc) -> | |||||
W = maps:get(maps:get(size, Layer), Weights), | |||||
B = maps:get(maps:get(size, Layer), Biases), | |||||
Z = matrix:add(matrix:multiply(W, Acc), B), | |||||
activate(Z, Network#network.activation) | |||||
end, | |||||
Input, | |||||
Layers | |||||
). | |||||
backward_propagation(Network, Input, Target) -> | |||||
% 实现反向传播算法 | |||||
{Gradients, Loss} = calculate_gradients(Network, Input, Target), | |||||
{Network#network{ | |||||
weights = update_weights(Network#network.weights, Gradients), | |||||
loss_history = [Loss | Network#network.loss_history] | |||||
}, Loss}. | |||||
activate(Z, relu) -> | |||||
matrix:map(fun(X) -> max(0, X) end, Z); | |||||
activate(Z, sigmoid) -> | |||||
matrix:map(fun(X) -> 1 / (1 + math:exp(-X)) end, Z); | |||||
activate(Z, tanh) -> | |||||
matrix:map(fun(X) -> math:tanh(X) end, Z). | |||||
optimize(Network, Gradients, adam) -> | |||||
% 实现Adam优化器 | |||||
update_adam(Network, Gradients); | |||||
optimize(Network, Gradients, sgd) -> | |||||
% 实现随机梯度下降 | |||||
update_sgd(Network, Gradients). |
@ -0,0 +1,136 @@ | |||||
-module(game_core). | |||||
-export([start_game/3, play_cards/3, get_game_state/1, is_valid_play/2]). | |||||
-export([init_deck/0, deal_cards/1, get_card_type/1, compare_cards/2]). | |||||
-record(game_state, { | |||||
players = [], % [{Pid, Cards, Role}] | |||||
current_player, % Pid | |||||
last_play = [], % {Pid, Cards} | |||||
played_cards = [], % [{Pid, Cards}] | |||||
stage = waiting, % waiting | playing | finished | |||||
landlord_cards = [] % 地主牌 | |||||
}). | |||||
%% 初始化扑克牌 | |||||
init_deck() -> | |||||
Colors = ["♠", "♥", "♣", "♦"], | |||||
Numbers = ["3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K", "A", "2"], | |||||
Cards = [{Color, Number} || Color <- Colors, Number <- Numbers], | |||||
Jokers = [{"", "小王"}, {"", "大王"}], | |||||
shuffle(Cards ++ Jokers). | |||||
%% 发牌 | |||||
deal_cards(Deck) -> | |||||
{Players, Landlord} = lists:split(51, Deck), | |||||
{lists:sublist(Players, 1, 17), | |||||
lists:sublist(Players, 18, 17), | |||||
lists:sublist(Players, 35, 17), | |||||
Landlord}. | |||||
%% 开始游戏 | |||||
start_game(Player1, Player2, Player3) -> | |||||
Deck = init_deck(), | |||||
{Cards1, Cards2, Cards3, LandlordCards} = deal_cards(Deck), | |||||
% 随机选择地主 | |||||
LandlordIdx = rand:uniform(3), | |||||
Players = assign_roles([{Player1, Cards1}, {Player2, Cards2}, {Player3, Cards3}], LandlordIdx), | |||||
#game_state{ | |||||
players = Players, | |||||
current_player = element(1, lists:nth(LandlordIdx, Players)), | |||||
landlord_cards = LandlordCards | |||||
}. | |||||
%% 出牌 | |||||
play_cards(GameState, PlayerPid, Cards) -> | |||||
case validate_play(GameState, PlayerPid, Cards) of | |||||
true -> | |||||
NewState = update_game_state(GameState, PlayerPid, Cards), | |||||
check_game_end(NewState); | |||||
false -> | |||||
{error, invalid_play} | |||||
end. | |||||
%% 验证出牌 | |||||
validate_play(GameState, PlayerPid, Cards) -> | |||||
case get_player_cards(GameState, PlayerPid) of | |||||
{ok, PlayerCards} -> | |||||
has_cards(PlayerCards, Cards) andalso | |||||
is_valid_play(Cards, GameState#game_state.last_play); | |||||
_ -> | |||||
false | |||||
end. | |||||
%% 判断牌型 | |||||
get_card_type(Cards) -> | |||||
Sorted = sort_cards(Cards), | |||||
case analyze_pattern(Sorted) of | |||||
{Type, Value} -> {ok, Type, Value}; | |||||
invalid -> {error, invalid_pattern} | |||||
end. | |||||
%% 内部函数 | |||||
shuffle(List) -> | |||||
[X || {_, X} <- lists:sort([{rand:uniform(), N} || N <- List])]. | |||||
analyze_pattern(Cards) -> | |||||
case length(Cards) of | |||||
1 -> {single, card_value(hd(Cards))}; | |||||
2 -> analyze_pair(Cards); | |||||
3 -> analyze_triple(Cards); | |||||
4 -> analyze_four_cards(Cards); | |||||
_ -> analyze_complex_pattern(Cards) | |||||
end. | |||||
analyze_pair([{_, N}, {_, N}]) -> {pair, card_value({any, N})}; | |||||
analyze_pair(_) -> invalid. | |||||
analyze_triple([{_, N}, {_, N}, {_, N}]) -> {triple, card_value({any, N})}; | |||||
analyze_triple(_) -> invalid. | |||||
analyze_four_cards(Cards) -> | |||||
case group_cards(Cards) of | |||||
#{4 := [Value]} -> {bomb, card_value({any, Value})}; | |||||
_ -> analyze_four_with_two(Cards) | |||||
end. | |||||
analyze_complex_pattern(Cards) -> | |||||
case is_straight(Cards) of | |||||
true -> {straight, highest_card_value(Cards)}; | |||||
false -> analyze_other_patterns(Cards) | |||||
end. | |||||
%% 辅助函数 | |||||
card_value({_, "A"}) -> 14; | |||||
card_value({_, "2"}) -> 15; | |||||
card_value({_, "小王"}) -> 16; | |||||
card_value({_, "大王"}) -> 17; | |||||
card_value({_, N}) when is_list(N) -> | |||||
try list_to_integer(N) | |||||
catch _:_ -> | |||||
case N of | |||||
"J" -> 11; | |||||
"Q" -> 12; | |||||
"K" -> 13 | |||||
end | |||||
end. | |||||
sort_cards(Cards) -> | |||||
lists:sort(fun(A, B) -> card_value(A) =< card_value(B) end, Cards). | |||||
group_cards(Cards) -> | |||||
lists:foldl( | |||||
fun({_, N}, Acc) -> | |||||
maps:update_with(N, fun(L) -> [N|L] end, [N], Acc) | |||||
end, | |||||
#{}, | |||||
Cards | |||||
). | |||||
is_straight(Cards) -> | |||||
Values = [card_value(C) || C <- Cards], | |||||
Sorted = lists:sort(Values), | |||||
length(Sorted) >= 5 andalso | |||||
lists:all(fun({A, B}) -> B - A =:= 1 end, | |||||
lists:zip(Sorted, tl(Sorted))). |
@ -0,0 +1,171 @@ | |||||
-module(game_logic). | |||||
-export([validate_play/2, calculate_card_value/1, evaluate_hand_strength/1, | |||||
find_all_possible_patterns/1, analyze_card_pattern/1]). | |||||
-define(CARD_VALUES, #{ | |||||
"3" => 3, "4" => 4, "5" => 5, "6" => 6, "7" => 7, "8" => 8, "9" => 9, | |||||
"10" => 10, "J" => 11, "Q" => 12, "K" => 13, "A" => 14, "2" => 15, | |||||
"小王" => 16, "大王" => 17 | |||||
}). | |||||
%% 牌型定义 | |||||
-record(card_pattern, { | |||||
type, % single | pair | triple | triple_with_one | straight | | |||||
% straight_pair | airplane | airplane_with_wings | four_with_two | bomb | rocket | |||||
value, % 主牌值 | |||||
length = 1, % 顺子长度 | |||||
extra = [] % 附加牌 | |||||
}). | |||||
%% 验证出牌是否合法 | |||||
validate_play(Cards, LastPlay) -> | |||||
case analyze_card_pattern(Cards) of | |||||
{invalid, _} -> | |||||
false; | |||||
{Pattern, Value} -> | |||||
case analyze_card_pattern(LastPlay) of | |||||
{Pattern, LastValue} when Value > LastValue -> | |||||
true; | |||||
{_, _} -> | |||||
check_special_cases(Pattern, Value, LastPlay); | |||||
_ -> | |||||
false | |||||
end | |||||
end. | |||||
%% 分析牌型 | |||||
analyze_card_pattern(Cards) -> | |||||
Grouped = group_cards(Cards), | |||||
case identify_pattern(Grouped) of | |||||
{Type, Value} when Type =/= invalid -> | |||||
{#card_pattern{type = Type, value = Value}, Value}; | |||||
_ -> | |||||
{invalid, 0} | |||||
end. | |||||
%% 计算手牌价值 | |||||
calculate_card_value(Cards) -> | |||||
{Pattern, BaseValue} = analyze_card_pattern(Cards), | |||||
case Pattern#card_pattern.type of | |||||
bomb -> BaseValue * 2; | |||||
rocket -> 1000; | |||||
_ -> BaseValue | |||||
end. | |||||
%% 评估手牌强度 | |||||
evaluate_hand_strength(Cards) -> | |||||
Patterns = find_all_possible_patterns(Cards), | |||||
BaseScore = lists:foldl( | |||||
fun(Pattern, Acc) -> | |||||
Acc + calculate_pattern_value(Pattern) | |||||
end, | |||||
0, | |||||
Patterns | |||||
), | |||||
adjust_score_by_combination(BaseScore, Cards). | |||||
%% 找出所有可能的牌型组合 | |||||
find_all_possible_patterns(Cards) -> | |||||
Grouped = group_cards(Cards), | |||||
Singles = find_singles(Grouped), | |||||
Pairs = find_pairs(Grouped), | |||||
Triples = find_triples(Grouped), | |||||
Straights = find_straights(Cards), | |||||
StraightPairs = find_straight_pairs(Grouped), | |||||
Airplanes = find_airplanes(Grouped), | |||||
Bombs = find_bombs(Grouped), | |||||
Rockets = find_rockets(Cards), | |||||
Singles ++ Pairs ++ Triples ++ Straights ++ StraightPairs ++ | |||||
Airplanes ++ Bombs ++ Rockets. | |||||
%% 内部辅助函数 | |||||
group_cards(Cards) -> | |||||
lists:foldl( | |||||
fun({_, Number}, Acc) -> | |||||
maps:update_with( | |||||
Number, | |||||
fun(Count) -> Count + 1 end, | |||||
1, | |||||
Acc | |||||
) | |||||
end, | |||||
#{}, | |||||
Cards | |||||
). | |||||
identify_pattern(Grouped) -> | |||||
case maps:size(Grouped) of | |||||
1 -> | |||||
[{Number, Count}] = maps:to_list(Grouped), | |||||
case Count of | |||||
1 -> {single, maps:get(Number, ?CARD_VALUES)}; | |||||
2 -> {pair, maps:get(Number, ?CARD_VALUES)}; | |||||
3 -> {triple, maps:get(Number, ?CARD_VALUES)}; | |||||
4 -> {bomb, maps:get(Number, ?CARD_VALUES)}; | |||||
_ -> {invalid, 0} | |||||
end; | |||||
_ -> | |||||
identify_complex_pattern(Grouped) | |||||
end. | |||||
identify_complex_pattern(Grouped) -> | |||||
case find_straight_pattern(Grouped) of | |||||
{ok, Pattern} -> Pattern; | |||||
false -> | |||||
case find_airplane_pattern(Grouped) of | |||||
{ok, Pattern} -> Pattern; | |||||
false -> | |||||
case find_four_with_two(Grouped) of | |||||
{ok, Pattern} -> Pattern; | |||||
false -> {invalid, 0} | |||||
end | |||||
end | |||||
end. | |||||
find_straight_pattern(Grouped) -> | |||||
Numbers = lists:sort(maps:keys(Grouped)), | |||||
case is_consecutive(Numbers) of | |||||
true when length(Numbers) >= 5 -> | |||||
MaxValue = lists:max([maps:get(N, ?CARD_VALUES) || N <- Numbers]), | |||||
{ok, {straight, MaxValue}}; | |||||
_ -> | |||||
false | |||||
end. | |||||
is_consecutive(Numbers) -> | |||||
case Numbers of | |||||
[] -> true; | |||||
[_] -> true; | |||||
[First|Rest] -> | |||||
lists:all( | |||||
fun({A, B}) -> maps:get(B, ?CARD_VALUES) - maps:get(A, ?CARD_VALUES) =:= 1 end, | |||||
lists:zip(Numbers, Rest) | |||||
) | |||||
end. | |||||
calculate_pattern_value(Pattern) -> | |||||
case Pattern of | |||||
#card_pattern{type = single, value = V} -> V; | |||||
#card_pattern{type = pair, value = V} -> V * 2; | |||||
#card_pattern{type = triple, value = V} -> V * 3; | |||||
#card_pattern{type = bomb, value = V} -> V * 4; | |||||
#card_pattern{type = rocket} -> 1000; | |||||
#card_pattern{type = straight, value = V, length = L} -> V * L; | |||||
_ -> 0 | |||||
end. | |||||
adjust_score_by_combination(BaseScore, Cards) -> | |||||
CombinationBonus = case length(Cards) of | |||||
N when N =< 5 -> BaseScore * 1.2; | |||||
N when N =< 10 -> BaseScore * 1.1; | |||||
_ -> BaseScore | |||||
end, | |||||
round(CombinationBonus). | |||||
check_special_cases(Pattern, Value, LastPlay) -> | |||||
case Pattern#card_pattern.type of | |||||
bomb -> true; | |||||
rocket -> true; | |||||
_ -> false | |||||
end. |
@ -0,0 +1,51 @@ | |||||
-module(game_manager). | |||||
-export([start_game/3, handle_play/2, end_game/1]). | |||||
-record(game_manager_state, { | |||||
game_id, | |||||
players, | |||||
ai_players, | |||||
current_state, | |||||
history | |||||
}). | |||||
start_game(Player1, Player2, Player3) -> | |||||
% 初始化游戏状态 | |||||
GameState = game_core:init_game_state(), | |||||
% 初始化AI玩家 | |||||
AIPlayers = initialize_ai_players(), | |||||
% 创建游戏管理器状态 | |||||
GameManagerState = #game_manager_state{ | |||||
game_id = generate_game_id(), | |||||
players = [Player1, Player2, Player3], | |||||
ai_players = AIPlayers, | |||||
current_state = GameState, | |||||
history = [] | |||||
}, | |||||
% 开始游戏循环 | |||||
game_loop(GameManagerState). | |||||
handle_play(GameManagerState, Play) -> | |||||
% 验证玩家行动 | |||||
case game_core:validate_play(GameManagerState#game_manager_state.current_state, Play) of | |||||
true -> | |||||
% 更新游戏状态 | |||||
NewState = game_core:update_state(GameManagerState#game_manager_state.current_state, Play), | |||||
% 更新AI玩家 | |||||
UpdatedAIPlayers = update_ai_players(GameManagerState#game_manager_state.ai_players, Play), | |||||
% 记录历史 | |||||
NewHistory = [Play | GameManagerState#game_manager_state.history], | |||||
GameManagerState#game_manager_state{ | |||||
current_state = NewState, | |||||
ai_players = UpdatedAIPlayers, | |||||
history = NewHistory | |||||
}; | |||||
false -> | |||||
{error, invalid_play} | |||||
end. |
@ -0,0 +1,191 @@ | |||||
-module(game_server). | |||||
-behaviour(gen_server). | |||||
-export([start_link/0, init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2, code_change/3]). | |||||
-export([create_game/0, join_game/2, start_game/1, play_cards/3, pass/2]). | |||||
-record(state, { | |||||
players = [], % [{PlayerPid, Cards}] | |||||
current_player = none, | |||||
last_play = [], | |||||
landlord = none, | |||||
game_status = waiting % waiting | playing | finished | |||||
}). | |||||
%% API | |||||
start_link() -> | |||||
gen_server:start_link({local, ?MODULE}, ?MODULE, [], []). | |||||
create_game() -> | |||||
gen_server:call(?MODULE, create_game). | |||||
join_game(GamePid, PlayerPid) -> | |||||
gen_server:call(GamePid, {join_game, PlayerPid}). | |||||
start_game(GamePid) -> | |||||
gen_server:call(GamePid, start_game). | |||||
play_cards(GamePid, PlayerPid, Cards) -> | |||||
gen_server:call(GamePid, {play_cards, PlayerPid, Cards}). | |||||
pass(GamePid, PlayerPid) -> | |||||
gen_server:call(GamePid, {pass, PlayerPid}). | |||||
%% Callbacks | |||||
init([]) -> | |||||
{ok, #state{}}. | |||||
handle_call(create_game, _From, State) -> | |||||
{reply, {ok, self()}, State#state{players = [], game_status = waiting}}; | |||||
handle_call({join_game, PlayerPid}, _From, State = #state{players = Players}) -> | |||||
case length(Players) < 3 of | |||||
true -> | |||||
NewPlayers = [{PlayerPid, []} | Players], | |||||
{reply, {ok, length(NewPlayers)}, State#state{players = NewPlayers}}; | |||||
false -> | |||||
{reply, {error, game_full}, State} | |||||
end; | |||||
handle_call(start_game, _From, State = #state{players = Players}) -> | |||||
case length(Players) =:= 3 of | |||||
true -> | |||||
% 初始化并洗牌 | |||||
Cards = cards:shuffle_cards(cards:init_cards()), | |||||
{P1Cards, P2Cards, P3Cards, LandlordCards} = cards:deal_cards(Cards), | |||||
% 随机选择地主 | |||||
LandlordIndex = rand:uniform(3), | |||||
{NewPlayers, NewLandlord} = assign_cards_and_landlord(Players, [P1Cards, P2Cards, P3Cards], LandlordCards, LandlordIndex), | |||||
{reply, {ok, NewLandlord}, State#state{ | |||||
players = NewPlayers, | |||||
current_player = NewLandlord, | |||||
game_status = playing, | |||||
landlord = NewLandlord | |||||
}}; | |||||
false -> | |||||
{reply, {error, not_enough_players}, State} | |||||
end; | |||||
handle_call({play_cards, PlayerPid, Cards}, _From, State = #state{current_player = PlayerPid, last_play = LastPlay, players = Players}) -> | |||||
case card_rules:validate_play(Cards, LastPlay) of | |||||
true -> | |||||
case remove_cards(PlayerPid, Cards, Players) of | |||||
{ok, NewPlayers} -> | |||||
NextPlayer = get_next_player(PlayerPid, Players), | |||||
case check_winner(NewPlayers) of | |||||
{winner, Winner} -> | |||||
{reply, {ok, winner, Winner}, State#state{ | |||||
players = NewPlayers, | |||||
game_status = finished | |||||
}}; | |||||
no_winner -> | |||||
{reply, {ok, next_player, NextPlayer}, State#state{ | |||||
players = NewPlayers, | |||||
current_player = NextPlayer, | |||||
last_play = Cards | |||||
}} | |||||
end; | |||||
error -> | |||||
{reply, {error, invalid_cards}, State} | |||||
end; | |||||
false -> | |||||
{reply, {error, invalid_play}, State} | |||||
end; | |||||
handle_call({pass, PlayerPid}, _From, State = #state{current_player = PlayerPid, last_play = LastPlay, players = Players}) -> | |||||
case LastPlay of | |||||
[] -> | |||||
{reply, {error, cannot_pass}, State}; | |||||
_ -> | |||||
NextPlayer = get_next_player(PlayerPid, Players), | |||||
{reply, {ok, next_player, NextPlayer}, State#state{current_player = NextPlayer}} | |||||
end; | |||||
handle_call(_, _From, State) -> | |||||
{reply, {error, invalid_call}, State}. | |||||
handle_cast(_, State) -> | |||||
{noreply, State}. | |||||
handle_info(_, State) -> | |||||
{noreply, State}. | |||||
terminate(_Reason, _State) -> | |||||
ok. | |||||
code_change(_OldVsn, State, _Extra) -> | |||||
{ok, State}. | |||||
%% 内部辅助函数 | |||||
%% 分配牌和地主 | |||||
assign_cards_and_landlord(Players, [P1, P2, P3], LandlordCards, LandlordIndex) -> | |||||
{Pid1, _} = lists:nth(1, Players), | |||||
{Pid2, _} = lists:nth(2, Players), | |||||
{Pid3, _} = lists:nth(3, Players), | |||||
case LandlordIndex of | |||||
1 -> | |||||
{[{Pid1, P1 ++ LandlordCards}, {Pid2, P2}, {Pid3, P3}], Pid1}; | |||||
2 -> | |||||
{[{Pid1, P1}, {Pid2, P2 ++ LandlordCards}, {Pid3, P3}], Pid2}; | |||||
3 -> | |||||
{[{Pid1, P1}, {Pid2, P2}, {Pid3, P3 ++ LandlordCards}], Pid3} | |||||
end. | |||||
%% 获取下一个玩家 | |||||
get_next_player(CurrentPid, Players) -> | |||||
PlayerPids = [Pid || {Pid, _} <- Players], | |||||
CurrentIndex = get_player_index(CurrentPid, PlayerPids), | |||||
lists:nth(1 + ((CurrentIndex) rem 3), PlayerPids). | |||||
%% 获取玩家索引 | |||||
get_player_index(Pid, Pids) -> | |||||
{Index, _} = lists:foldl( | |||||
fun(P, {Idx, Found}) -> | |||||
case P =:= Pid of | |||||
true -> {Idx, Idx}; | |||||
false -> {Idx + 1, Found} | |||||
end | |||||
end, | |||||
{1, none}, | |||||
Pids | |||||
), | |||||
Index. | |||||
%% 移除玩家手中的牌 | |||||
remove_cards(PlayerPid, CardsToRemove, Players) -> | |||||
case lists:keyfind(PlayerPid, 1, Players) of | |||||
{PlayerPid, PlayerCards} -> | |||||
case can_remove_cards(CardsToRemove, PlayerCards) of | |||||
true -> | |||||
NewCards = PlayerCards -- CardsToRemove, | |||||
NewPlayers = lists:keyreplace(PlayerPid, 1, Players, {PlayerPid, NewCards}), | |||||
{ok, NewPlayers}; | |||||
false -> | |||||
error | |||||
end; | |||||
false -> | |||||
error | |||||
end. | |||||
%% 检查是否能移除指定的牌 | |||||
can_remove_cards(CardsToRemove, PlayerCards) -> | |||||
lists:all( | |||||
fun(Card) -> | |||||
lists:member(Card, PlayerCards) | |||||
end, | |||||
CardsToRemove | |||||
). | |||||
%% 检查是否有获胜者 | |||||
check_winner(Players) -> | |||||
case lists:filter( | |||||
fun({_, Cards}) -> | |||||
length(Cards) =:= 0 | |||||
end, | |||||
Players | |||||
) of | |||||
[{Winner, _}|_] -> {winner, Winner}; | |||||
[] -> no_winner | |||||
end. |
@ -0,0 +1,112 @@ | |||||
-module(matrix). | |||||
-export([new/3, multiply/2, add/2, subtract/2, transpose/1, map/2]). | |||||
-export([from_list/1, to_list/1, get/3, set/4, shape/1]). | |||||
-record(matrix, { | |||||
rows, | |||||
cols, | |||||
data | |||||
}). | |||||
new(Rows, Cols, InitFun) when is_integer(Rows), is_integer(Cols), Rows > 0, Cols > 0 -> | |||||
Data = array:new(Rows * Cols, {default, 0.0}), | |||||
Data2 = case is_function(InitFun) of | |||||
true -> | |||||
lists:foldl( | |||||
fun(I, Acc) -> | |||||
lists:foldl( | |||||
fun(J, Acc2) -> | |||||
array:set(I * Cols + J, InitFun(), Acc2) | |||||
end, | |||||
Acc, | |||||
lists:seq(0, Cols-1) | |||||
) | |||||
end, | |||||
Data, | |||||
lists:seq(0, Rows-1) | |||||
); | |||||
false -> | |||||
array:set_value(InitFun, Data) | |||||
end, | |||||
#matrix{rows = Rows, cols = Cols, data = Data2}. | |||||
multiply(#matrix{rows = M, cols = N, data = Data1}, | |||||
#matrix{rows = N, cols = P, data = Data2}) -> | |||||
Result = array:new(M * P, {default, 0.0}), | |||||
ResultData = lists:foldl( | |||||
fun(I, Acc1) -> | |||||
lists:foldl( | |||||
fun(J, Acc2) -> | |||||
Sum = lists:sum([ | |||||
array:get(I * N + K, Data1) * array:get(K * P + J, Data2) | |||||
|| K <- lists:seq(0, N-1) | |||||
]), | |||||
array:set(I * P + J, Sum, Acc2) | |||||
end, | |||||
Acc1, | |||||
lists:seq(0, P-1) | |||||
) | |||||
end, | |||||
Result, | |||||
lists:seq(0, M-1) | |||||
), | |||||
#matrix{rows = M, cols = P, data = ResultData}. | |||||
add(#matrix{rows = R, cols = C, data = Data1}, | |||||
#matrix{rows = R, cols = C, data = Data2}) -> | |||||
NewData = array:map( | |||||
fun(I, V) -> V + array:get(I, Data2) end, | |||||
Data1 | |||||
), | |||||
#matrix{rows = R, cols = C, data = NewData}. | |||||
subtract(#matrix{rows = R, cols = C, data = Data1}, | |||||
#matrix{rows = R, cols = C, data = Data2}) -> | |||||
NewData = array:map( | |||||
fun(I, V) -> V - array:get(I, Data2) end, | |||||
Data1 | |||||
), | |||||
#matrix{rows = R, cols = C, data = NewData}. | |||||
transpose(#matrix{rows = R, cols = C, data = Data}) -> | |||||
NewData = array:new(R * C, {default, 0.0}), | |||||
TransposedData = lists:foldl( | |||||
fun(I, Acc1) -> | |||||
lists:foldl( | |||||
fun(J, Acc2) -> | |||||
array:set(J * R + I, array:get(I * C + J, Data), Acc2) | |||||
end, | |||||
Acc1, | |||||
lists:seq(0, C-1) | |||||
) | |||||
end, | |||||
NewData, | |||||
lists:seq(0, R-1) | |||||
), | |||||
#matrix{rows = C, cols = R, data = TransposedData}. | |||||
map(Fun, #matrix{rows = R, cols = C, data = Data}) -> | |||||
NewData = array:map(fun(_, V) -> Fun(V) end, Data), | |||||
#matrix{rows = R, cols = C, data = NewData}. | |||||
from_list(List) when is_list(List) -> | |||||
Rows = length(List), | |||||
Cols = length(hd(List)), | |||||
Data = array:from_list(lists:flatten(List)), | |||||
#matrix{rows = Rows, cols = Cols, data = Data}. | |||||
to_list(#matrix{rows = R, cols = C, data = Data}) -> | |||||
[ | |||||
[array:get(I * C + J, Data) || J <- lists:seq(0, C-1)] | |||||
|| I <- lists:seq(0, R-1) | |||||
]. | |||||
get(#matrix{cols = C, data = Data}, Row, Col) -> | |||||
array:get(Row * C + Col, Data). | |||||
set(#matrix{cols = C, data = Data} = M, Row, Col, Value) -> | |||||
NewData = array:set(Row * C + Col, Value, Data), | |||||
M#matrix{data = NewData}. | |||||
shape(#matrix{rows = R, cols = C}) -> | |||||
{R, C}. |
@ -0,0 +1,157 @@ | |||||
-module(ml_engine). | |||||
-behaviour(gen_server). | |||||
-export([start_link/0, init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2, code_change/3]). | |||||
-export([train/2, predict/2, update_model/3, get_model_stats/1]). | |||||
-record(state, { | |||||
models = #{}, % Map: ModelName -> ModelData | |||||
training_data = #{}, % Map: ModelName -> TrainingData | |||||
model_stats = #{}, % Map: ModelName -> Stats | |||||
last_update = undefined | |||||
}). | |||||
-record(model_data, { | |||||
weights = #{}, % 模型权重 | |||||
features = [], % 特征列表 | |||||
learning_rate = 0.01, % 学习率 | |||||
iterations = 0, % 训练迭代次数 | |||||
accuracy = 0.0 % 模型准确率 | |||||
}). | |||||
%% API | |||||
start_link() -> | |||||
gen_server:start_link({local, ?MODULE}, ?MODULE, [], []). | |||||
train(ModelName, TrainingData) -> | |||||
gen_server:call(?MODULE, {train, ModelName, TrainingData}). | |||||
predict(ModelName, Features) -> | |||||
gen_server:call(?MODULE, {predict, ModelName, Features}). | |||||
update_model(ModelName, NewData, Reward) -> | |||||
gen_server:cast(?MODULE, {update_model, ModelName, NewData, Reward}). | |||||
get_model_stats(ModelName) -> | |||||
gen_server:call(?MODULE, {get_stats, ModelName}). | |||||
%% Callbacks | |||||
init([]) -> | |||||
% 初始化各种策略模型 | |||||
Models = initialize_models(), | |||||
{ok, #state{models = Models, last_update = os:timestamp()}}. | |||||
handle_call({train, ModelName, TrainingData}, _From, State) -> | |||||
{NewModel, Stats} = train_model(ModelName, TrainingData, State), | |||||
NewModels = maps:put(ModelName, NewModel, State#state.models), | |||||
NewStats = maps:put(ModelName, Stats, State#state.model_stats), | |||||
{reply, {ok, Stats}, State#state{models = NewModels, model_stats = NewStats}}; | |||||
handle_call({predict, ModelName, Features}, _From, State) -> | |||||
case maps:get(ModelName, State#state.models, undefined) of | |||||
undefined -> | |||||
{reply, {error, model_not_found}, State}; | |||||
Model -> | |||||
Prediction = make_prediction(Model, Features), | |||||
{reply, {ok, Prediction}, State} | |||||
end; | |||||
handle_call({get_stats, ModelName}, _From, State) -> | |||||
Stats = maps:get(ModelName, State#state.model_stats, undefined), | |||||
{reply, {ok, Stats}, State}. | |||||
handle_cast({update_model, ModelName, NewData, Reward}, State) -> | |||||
Model = maps:get(ModelName, State#state.models), | |||||
UpdatedModel = update_model_weights(Model, NewData, Reward), | |||||
NewModels = maps:put(ModelName, UpdatedModel, State#state.models), | |||||
{noreply, State#state{models = NewModels}}. | |||||
%% 内部函数 | |||||
initialize_models() -> | |||||
Models = #{ | |||||
play_strategy => init_play_strategy_model(), | |||||
card_combination => init_card_combination_model(), | |||||
opponent_prediction => init_opponent_prediction_model(), | |||||
game_state_evaluation => init_game_state_model() | |||||
}, | |||||
Models. | |||||
init_play_strategy_model() -> | |||||
#model_data{ | |||||
features = [ | |||||
remaining_cards, | |||||
opponent_cards, | |||||
current_position, | |||||
game_stage, | |||||
last_play_type, | |||||
has_control | |||||
] | |||||
}. | |||||
init_card_combination_model() -> | |||||
#model_data{ | |||||
features = [ | |||||
card_count, | |||||
card_types, | |||||
sequence_length, | |||||
combination_value | |||||
] | |||||
}. | |||||
init_opponent_prediction_model() -> | |||||
#model_data{ | |||||
features = [ | |||||
played_cards, | |||||
remaining_unknown, | |||||
player_position, | |||||
playing_pattern | |||||
] | |||||
}. | |||||
init_game_state_model() -> | |||||
#model_data{ | |||||
features = [ | |||||
cards_played, | |||||
cards_remaining, | |||||
player_positions, | |||||
game_control | |||||
] | |||||
}. | |||||
train_model(ModelName, TrainingData, State) -> | |||||
Model = maps:get(ModelName, State#state.models), | |||||
{UpdatedModel, Stats} = case ModelName of | |||||
play_strategy -> | |||||
train_play_strategy(Model, TrainingData); | |||||
card_combination -> | |||||
train_card_combination(Model, TrainingData); | |||||
opponent_prediction -> | |||||
train_opponent_prediction(Model, TrainingData); | |||||
game_state_evaluation -> | |||||
train_game_state(Model, TrainingData) | |||||
end, | |||||
{UpdatedModel, Stats}. | |||||
make_prediction(Model, Features) -> | |||||
% 使用模型权重和特征进行预测 | |||||
Weights = Model#model_data.weights, | |||||
calculate_prediction(Features, Weights). | |||||
update_model_weights(Model, NewData, Reward) -> | |||||
% 使用强化学习更新模型权重 | |||||
CurrentWeights = Model#model_data.weights, | |||||
LearningRate = Model#model_data.learning_rate, | |||||
UpdatedWeights = apply_reinforcement_learning(CurrentWeights, NewData, Reward, LearningRate), | |||||
Model#model_data{weights = UpdatedWeights, iterations = Model#model_data.iterations + 1}. | |||||
calculate_prediction(Features, Weights) -> | |||||
% 实现预测算法 | |||||
lists:foldl( | |||||
fun({Feature, Value}, Acc) -> | |||||
Weight = maps:get(Feature, Weights, 0), | |||||
Acc + (Value * Weight) | |||||
end, | |||||
0, | |||||
Features | |||||
). |
@ -0,0 +1,62 @@ | |||||
-module(opponent_modeling). | |||||
-export([create_model/0, update_model/2, analyze_opponent/2, predict_play/2]). | |||||
-record(opponent_model, { | |||||
play_patterns = #{}, % 出牌模式统计 | |||||
card_preferences = #{}, % 牌型偏好 | |||||
risk_profile = 0.5, % 风险偏好 | |||||
skill_rating = 500, % 技能评分 | |||||
play_history = [] % 历史出牌记录 | |||||
}). | |||||
create_model() -> | |||||
#opponent_model{}. | |||||
update_model(Model, GamePlay) -> | |||||
% 更新出牌模式统计 | |||||
NewPatterns = update_play_patterns(Model#opponent_model.play_patterns, GamePlay), | |||||
% 更新牌型偏好 | |||||
NewPreferences = update_card_preferences(Model#opponent_model.card_preferences, GamePlay), | |||||
% 更新风险偏好 | |||||
NewRiskProfile = calculate_risk_profile(Model#opponent_model.risk_profile, GamePlay), | |||||
% 更新技能评分 | |||||
NewSkillRating = update_skill_rating(Model#opponent_model.skill_rating, GamePlay), | |||||
% 更新历史记录 | |||||
NewHistory = [GamePlay | Model#opponent_model.play_history], | |||||
Model#opponent_model{ | |||||
play_patterns = NewPatterns, | |||||
card_preferences = NewPreferences, | |||||
risk_profile = NewRiskProfile, | |||||
skill_rating = NewSkillRating, | |||||
play_history = lists:sublist(NewHistory, 100) % 保留最近100次出牌记录 | |||||
}. | |||||
analyze_opponent(Model, GameState) -> | |||||
#{ | |||||
style => determine_play_style(Model), | |||||
strength => calculate_opponent_strength(Model), | |||||
predictability => calculate_predictability(Model), | |||||
weakness => identify_weaknesses(Model) | |||||
}. | |||||
predict_play(Model, GameState) -> | |||||
% 基于历史模式预测 | |||||
HistoryBasedPrediction = predict_from_history(Model, GameState), | |||||
% 基于牌型偏好预测 | |||||
PreferenceBasedPrediction = predict_from_preferences(Model, GameState), | |||||
% 基于风险偏好预测 | |||||
RiskBasedPrediction = predict_from_risk_profile(Model, GameState), | |||||
% 综合预测结果 | |||||
combine_predictions([ | |||||
{HistoryBasedPrediction, 0.4}, | |||||
{PreferenceBasedPrediction, 0.3}, | |||||
{RiskBasedPrediction, 0.3} | |||||
]). |
@ -0,0 +1,55 @@ | |||||
-module(optimizer). | |||||
-export([update_adam/3, update_sgd/3, init_adam_state/0]). | |||||
-record(adam_state, { | |||||
m = #{}, % First moment | |||||
v = #{}, % Second moment | |||||
t = 0, % Timestamp | |||||
beta1 = 0.9, % Exponential decay rate for first moment | |||||
beta2 = 0.999, % Exponential decay rate for second moment | |||||
epsilon = 1.0e-8 | |||||
}). | |||||
init_adam_state() -> | |||||
#adam_state{}. | |||||
update_adam(Params, Gradients, State = #adam_state{t = T}) -> | |||||
NewT = T + 1, | |||||
{NewParams, NewM, NewV} = maps:fold( | |||||
fun(Key, Grad, {ParamsAcc, MAcc, VAcc}) -> | |||||
M = maps:get(Key, State#adam_state.m, 0.0), | |||||
V = maps:get(Key, State#adam_state.v, 0.0), | |||||
% Update biased first moment estimate | |||||
NewM1 = State#adam_state.beta1 * M + (1 - State#adam_state.beta1) * Grad, | |||||
% Update biased second moment estimate | |||||
NewV1 = State#adam_state.beta2 * V + (1 - State#adam_state.beta2) * Grad * Grad, | |||||
% Compute bias-corrected first moment estimate | |||||
MHat = NewM1 / (1 - math:pow(State#adam_state.beta1, NewT)), | |||||
% Compute bias-corrected second moment estimate | |||||
VHat = NewV1 / (1 - math:pow(State#adam_state.beta2, NewT)), | |||||
% Update parameters | |||||
Param = maps:get(Key, Params), | |||||
NewParam = Param - State#adam_state.epsilon * MHat / (math:sqrt(VHat) + State#adam_state.epsilon), | |||||
{ | |||||
maps:put(Key, NewParam, ParamsAcc), | |||||
maps:put(Key, NewM1, MAcc), | |||||
maps:put(Key, NewV1, VAcc) | |||||
} | |||||
end, | |||||
{Params, State#adam_state.m, State#adam_state.v}, | |||||
Gradients | |||||
), | |||||
{NewParams, State#adam_state{m = NewM, v = NewV, t = NewT}}. | |||||
update_sgd(Params, Gradients, LearningRate) -> | |||||
maps:map( | |||||
fun(Key, Param) -> | |||||
Grad = maps:get(Key, Gradients), | |||||
Param - LearningRate * Grad | |||||
end, | |||||
Params | |||||
). |
@ -0,0 +1,55 @@ | |||||
-module(parallel_compute). | |||||
-behaviour(gen_server). | |||||
-export([start_link/0, init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2, code_change/3]). | |||||
-export([parallel_predict/2, batch_process/2]). | |||||
-record(state, { | |||||
worker_pool = [], % 工作进程池 | |||||
job_queue = [], % 任务队列 | |||||
results = #{}, % 结果集 | |||||
pool_size = 4 % 默认工作进程数 | |||||
}). | |||||
%% API | |||||
start_link() -> | |||||
gen_server:start_link({local, ?MODULE}, ?MODULE, [], []). | |||||
parallel_predict(Inputs, Model) -> | |||||
gen_server:call(?MODULE, {parallel_predict, Inputs, Model}). | |||||
batch_process(BatchData, ProcessFun) -> | |||||
gen_server:call(?MODULE, {batch_process, BatchData, ProcessFun}). | |||||
%% 内部函数 | |||||
initialize_worker_pool(PoolSize) -> | |||||
[spawn_worker() || _ <- lists:seq(1, PoolSize)]. | |||||
spawn_worker() -> | |||||
spawn_link(fun() -> worker_loop() end). | |||||
worker_loop() -> | |||||
receive | |||||
{process, Data, From} -> | |||||
Result = process_data(Data), | |||||
From ! {result, self(), Result}, | |||||
worker_loop(); | |||||
stop -> | |||||
ok | |||||
end. | |||||
process_data({predict, Input, Model}) -> | |||||
deep_learning:predict(Model, Input); | |||||
process_data({custom, Fun, Data}) -> | |||||
Fun(Data). | |||||
distribute_work(Workers, Jobs) -> | |||||
distribute_work(Workers, Jobs, #{}). | |||||
distribute_work(_, [], Results) -> | |||||
Results; | |||||
distribute_work(Workers, [Job|Jobs], Results) -> | |||||
[Worker|RestWorkers] = Workers, | |||||
Worker ! {process, Job, self()}, | |||||
distribute_work(RestWorkers ++ [Worker], Jobs, Results). |
@ -0,0 +1,76 @@ | |||||
-module(performance_monitor). | |||||
-behaviour(gen_server). | |||||
-export([start_link/0, init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2, code_change/3]). | |||||
-export([start_monitoring/1, stop_monitoring/1, get_metrics/1, generate_report/1]). | |||||
-record(state, { | |||||
monitors = #{}, % 监控对象集合 | |||||
metrics = #{}, % 性能指标数据 | |||||
alerts = [], % 告警信息 | |||||
start_time = undefined | |||||
}). | |||||
-record(monitor_data, { | |||||
type, % 监控类型 | |||||
metrics = [], % 指标列表 | |||||
threshold = #{}, % 阈值设置 | |||||
callback % 回调函数 | |||||
}). | |||||
%% API | |||||
start_link() -> | |||||
gen_server:start_link({local, ?MODULE}, ?MODULE, [], []). | |||||
start_monitoring(Target) -> | |||||
gen_server:call(?MODULE, {start_monitoring, Target}). | |||||
stop_monitoring(Target) -> | |||||
gen_server:call(?MODULE, {stop_monitoring, Target}). | |||||
get_metrics(Target) -> | |||||
gen_server:call(?MODULE, {get_metrics, Target}). | |||||
generate_report(Target) -> | |||||
gen_server:call(?MODULE, {generate_report, Target}). | |||||
%% 内部函数 | |||||
collect_metrics(Target) -> | |||||
% 收集各种性能指标 | |||||
#{ | |||||
cpu_usage => get_cpu_usage(Target), | |||||
memory_usage => get_memory_usage(Target), | |||||
response_time => get_response_time(Target), | |||||
throughput => get_throughput(Target) | |||||
}. | |||||
analyze_performance(Metrics) -> | |||||
% 分析性能数据 | |||||
#{ | |||||
avg_response_time => calculate_average(maps:get(response_time, Metrics)), | |||||
peak_memory => get_peak_value(maps:get(memory_usage, Metrics)), | |||||
bottlenecks => identify_bottlenecks(Metrics) | |||||
}. | |||||
generate_alerts(Metrics, Thresholds) -> | |||||
% 生成性能告警 | |||||
lists:filtermap( | |||||
fun({Metric, Value}) -> | |||||
case check_threshold(Metric, Value, Thresholds) of | |||||
{true, Alert} -> {true, Alert}; | |||||
false -> false | |||||
end | |||||
end, | |||||
maps:to_list(Metrics) | |||||
). | |||||
create_report(Target, Metrics) -> | |||||
% 生成性能报告 | |||||
#{ | |||||
target => Target, | |||||
timestamp => os:timestamp(), | |||||
metrics => Metrics, | |||||
analysis => analyze_performance(Metrics), | |||||
recommendations => generate_recommendations(Metrics) | |||||
}. |
@ -0,0 +1,37 @@ | |||||
-module(performance_optimization). | |||||
-behaviour(gen_server). | |||||
-export([start_link/0, init/1, handle_call/3, handle_cast/2]). | |||||
-export([optimize_resources/0, get_performance_stats/0]). | |||||
-record(state, { | |||||
resource_usage = #{}, | |||||
optimization_rules = #{}, | |||||
performance_history = [] | |||||
}). | |||||
start_link() -> | |||||
gen_server:start_link({local, ?MODULE}, ?MODULE, [], []). | |||||
init([]) -> | |||||
schedule_optimization(), | |||||
{ok, #state{}}. | |||||
optimize_resources() -> | |||||
gen_server:cast(?MODULE, optimize). | |||||
get_performance_stats() -> | |||||
gen_server:call(?MODULE, get_stats). | |||||
%% 内部函数 | |||||
schedule_optimization() -> | |||||
erlang:send_after(60000, self(), run_optimization). | |||||
analyze_resource_usage() -> | |||||
{ok, Usage} = cpu_sup:util([detailed]), | |||||
{ok, Memory} = memsup:get_system_memory_data(), | |||||
#{ | |||||
cpu => Usage, | |||||
memory => Memory, | |||||
process_count => erlang:system_info(process_count) | |||||
}. |
@ -0,0 +1,62 @@ | |||||
%% 在 record state 中添加新的字段 | |||||
-record(state, { | |||||
name, | |||||
game_pid, | |||||
cards = [], | |||||
score = 0, | |||||
wins = 0, | |||||
losses = 0 | |||||
}). | |||||
%% 添加新的API函数 | |||||
get_name(PlayerPid) -> | |||||
gen_server:call(PlayerPid, get_name). | |||||
get_statistics(PlayerPid) -> | |||||
gen_server:call(PlayerPid, get_statistics). | |||||
%% 在 init/1 中初始化积分 | |||||
init([Name]) -> | |||||
case score_system:get_score(Name) of | |||||
{ok, {Score, Wins, Losses}} -> | |||||
{ok, #state{name = Name, score = Score, wins = Wins, losses = Losses}}; | |||||
_ -> | |||||
{ok, #state{name = Name}} | |||||
end. | |||||
%% 添加新的处理函数 | |||||
handle_call(get_name, _From, State) -> | |||||
{reply, {ok, State#state.name}, State}; | |||||
handle_call(get_statistics, _From, State) -> | |||||
Stats = {State#state.score, State#state.wins, State#state.losses}, | |||||
{reply, {ok, Stats}, State}; | |||||
%% 修改游戏结束处理 | |||||
handle_cast({game_end, Winner}, State = #state{name = Name}) -> | |||||
Points = calculate_points(Winner, State#state.name), | |||||
GameResult = case Name =:= Winner of | |||||
true -> win; | |||||
false -> loss | |||||
end, | |||||
{ok, {NewScore, NewWins, NewLosses}} = | |||||
score_system:update_score(Name, GameResult, Points), | |||||
{noreply, State#state{ | |||||
score = NewScore, | |||||
wins = NewWins, | |||||
losses = NewLosses | |||||
}}. | |||||
%% 添加内部辅助函数 | |||||
calculate_points(Winner, PlayerName) -> | |||||
case {Winner =:= PlayerName, is_landlord(PlayerName)} of | |||||
{true, true} -> 3; % 地主赢 | |||||
{true, false} -> 2; % 农民赢 | |||||
{false, true} -> -3; % 地主输 | |||||
{false, false} -> -2 % 农民输 | |||||
end. | |||||
is_landlord(PlayerName) -> | |||||
%% 根据游戏状态判断是否为地主 | |||||
%% 这里需要与 game_server 配合实现 | |||||
false. |
@ -0,0 +1,121 @@ | |||||
-module(room_manager). | |||||
-behaviour(gen_server). | |||||
-export([start_link/0, init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2, code_change/3]). | |||||
-export([create_room/2, list_rooms/0, join_room/2, leave_room/2, delete_room/1]). | |||||
-record(state, { | |||||
rooms = #{} % Map: RoomId -> {RoomName, GamePid, Players, Status} | |||||
}). | |||||
-record(room, { | |||||
id, | |||||
name, | |||||
game_pid, | |||||
players = [], % [{PlayerPid, PlayerName}] | |||||
status = waiting % waiting | playing | |||||
}). | |||||
%% API | |||||
start_link() -> | |||||
gen_server:start_link({local, ?MODULE}, ?MODULE, [], []). | |||||
create_room(RoomName, CreatorPid) -> | |||||
gen_server:call(?MODULE, {create_room, RoomName, CreatorPid}). | |||||
list_rooms() -> | |||||
gen_server:call(?MODULE, list_rooms). | |||||
join_room(RoomId, PlayerPid) -> | |||||
gen_server:call(?MODULE, {join_room, RoomId, PlayerPid}). | |||||
leave_room(RoomId, PlayerPid) -> | |||||
gen_server:call(?MODULE, {leave_room, RoomId, PlayerPid}). | |||||
delete_room(RoomId) -> | |||||
gen_server:call(?MODULE, {delete_room, RoomId}). | |||||
%% Callbacks | |||||
init([]) -> | |||||
{ok, #state{}}. | |||||
handle_call({create_room, RoomName, CreatorPid}, _From, State = #state{rooms = Rooms}) -> | |||||
RoomId = generate_room_id(), | |||||
{ok, GamePid} = game_server:start_link(), | |||||
NewRoom = #room{ | |||||
id = RoomId, | |||||
name = RoomName, | |||||
game_pid = GamePid, | |||||
players = [{CreatorPid, get_player_name(CreatorPid)}] | |||||
}, | |||||
NewRooms = maps:put(RoomId, NewRoom, Rooms), | |||||
{reply, {ok, RoomId}, State#state{rooms = NewRooms}}; | |||||
handle_call(list_rooms, _From, State = #state{rooms = Rooms}) -> | |||||
RoomList = maps:fold( | |||||
fun(RoomId, Room, Acc) -> | |||||
[{RoomId, Room#room.name, length(Room#room.players), Room#room.status} | Acc] | |||||
end, | |||||
[], | |||||
Rooms | |||||
), | |||||
{reply, {ok, RoomList}, State}; | |||||
handle_call({join_room, RoomId, PlayerPid}, _From, State = #state{rooms = Rooms}) -> | |||||
case maps:find(RoomId, Rooms) of | |||||
{ok, Room = #room{players = Players, status = waiting}} -> | |||||
case length(Players) < 3 of | |||||
true -> | |||||
NewPlayers = Players ++ [{PlayerPid, get_player_name(PlayerPid)}], | |||||
NewRoom = Room#room{players = NewPlayers}, | |||||
NewRooms = maps:put(RoomId, NewRoom, Rooms), | |||||
{reply, {ok, Room#room.game_pid}, State#state{rooms = NewRooms}}; | |||||
false -> | |||||
{reply, {error, room_full}, State} | |||||
end; | |||||
{ok, #room{status = playing}} -> | |||||
{reply, {error, game_in_progress}, State}; | |||||
error -> | |||||
{reply, {error, room_not_found}, State} | |||||
end; | |||||
handle_call({leave_room, RoomId, PlayerPid}, _From, State = #state{rooms = Rooms}) -> | |||||
case maps:find(RoomId, Rooms) of | |||||
{ok, Room = #room{players = Players}} -> | |||||
NewPlayers = lists:keydelete(PlayerPid, 1, Players), | |||||
NewRoom = Room#room{players = NewPlayers}, | |||||
NewRooms = maps:put(RoomId, NewRoom, Rooms), | |||||
{reply, ok, State#state{rooms = NewRooms}}; | |||||
error -> | |||||
{reply, {error, room_not_found}, State} | |||||
end; | |||||
handle_call({delete_room, RoomId}, _From, State = #state{rooms = Rooms}) -> | |||||
case maps:find(RoomId, Rooms) of | |||||
{ok, Room} -> | |||||
gen_server:stop(Room#room.game_pid), | |||||
NewRooms = maps:remove(RoomId, Rooms), | |||||
{reply, ok, State#state{rooms = NewRooms}}; | |||||
error -> | |||||
{reply, {error, room_not_found}, State} | |||||
end. | |||||
handle_cast(_Msg, State) -> | |||||
{noreply, State}. | |||||
handle_info(_Info, State) -> | |||||
{noreply, State}. | |||||
terminate(_Reason, _State) -> | |||||
ok. | |||||
code_change(_OldVsn, State, _Extra) -> | |||||
{ok, State}. | |||||
%% 内部辅助函数 | |||||
generate_room_id() -> | |||||
erlang:system_time(). | |||||
get_player_name(PlayerPid) -> | |||||
{ok, Name} = player:get_name(PlayerPid), | |||||
Name. |
@ -0,0 +1,80 @@ | |||||
-module(score_system). | |||||
-behaviour(gen_server). | |||||
-export([start_link/0, init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2, code_change/3]). | |||||
-export([get_score/1, update_score/3, get_leaderboard/0]). | |||||
-record(state, { | |||||
scores = #{}, % Map: PlayerName -> {Score, Wins, Losses} | |||||
leaderboard = [] | |||||
}). | |||||
%% API | |||||
start_link() -> | |||||
gen_server:start_link({local, ?MODULE}, ?MODULE, [], []). | |||||
get_score(PlayerName) -> | |||||
gen_server:call(?MODULE, {get_score, PlayerName}). | |||||
update_score(PlayerName, GameResult, Points) -> | |||||
gen_server:call(?MODULE, {update_score, PlayerName, GameResult, Points}). | |||||
get_leaderboard() -> | |||||
gen_server:call(?MODULE, get_leaderboard). | |||||
%% Callbacks | |||||
init([]) -> | |||||
{ok, #state{scores = #{}}}. | |||||
handle_call({get_score, PlayerName}, _From, State = #state{scores = Scores}) -> | |||||
case maps:find(PlayerName, Scores) of | |||||
{ok, ScoreData} -> | |||||
{reply, {ok, ScoreData}, State}; | |||||
error -> | |||||
{reply, {ok, {0, 0, 0}}, State} | |||||
end; | |||||
handle_call({update_score, PlayerName, GameResult, Points}, _From, State = #state{scores = Scores}) -> | |||||
{Score, Wins, Losses} = case maps:find(PlayerName, Scores) of | |||||
{ok, {OldScore, OldWins, OldLosses}} -> | |||||
{OldScore, OldWins, OldLosses}; | |||||
error -> | |||||
{0, 0, 0} | |||||
end, | |||||
{NewScore, NewWins, NewLosses} = case GameResult of | |||||
win -> {Score + Points, Wins + 1, Losses}; | |||||
loss -> {Score - Points, Wins, Losses + 1} | |||||
end, | |||||
NewScores = maps:put(PlayerName, {NewScore, NewWins, NewLosses}, Scores), | |||||
NewLeaderboard = update_leaderboard(NewScores), | |||||
{reply, {ok, {NewScore, NewWins, NewLosses}}, | |||||
State#state{scores = NewScores, leaderboard = NewLeaderboard}}; | |||||
handle_call(get_leaderboard, _From, State = #state{leaderboard = Leaderboard}) -> | |||||
{reply, {ok, Leaderboard}, State}. | |||||
handle_cast(_Msg, State) -> | |||||
{noreply, State}. | |||||
handle_info(_Info, State) -> | |||||
{noreply, State}. | |||||
terminate(_Reason, _State) -> | |||||
ok. | |||||
code_change(_OldVsn, State, _Extra) -> | |||||
{ok, State}. | |||||
%% 内部辅助函数 | |||||
update_leaderboard(Scores) -> | |||||
List = maps:to_list(Scores), | |||||
SortedList = lists:sort( | |||||
fun({_, {Score1, _, _}}, {_, {Score2, _, _}}) -> | |||||
Score1 >= Score2 | |||||
end, | |||||
List | |||||
), | |||||
lists:sublist(SortedList, 10). |
@ -0,0 +1,53 @@ | |||||
-module(strategy_optimizer). | |||||
-export([optimize_strategy/2, evaluate_strategy/2, adapt_strategy/3]). | |||||
-record(strategy_state, { | |||||
current_strategy, | |||||
performance_metrics, | |||||
adaptation_rate, | |||||
optimization_history | |||||
}). | |||||
optimize_strategy(Strategy, GameState) -> | |||||
% 分析当前局势 | |||||
SituationAnalysis = analyze_current_situation(GameState), | |||||
% 生成策略变体 | |||||
StrategyVariants = generate_strategy_variants(Strategy, SituationAnalysis), | |||||
% 评估所有变体 | |||||
EvaluatedVariants = evaluate_strategy_variants(StrategyVariants, GameState), | |||||
% 选择最佳变体 | |||||
select_best_strategy(EvaluatedVariants). | |||||
evaluate_strategy(Strategy, GameState) -> | |||||
% 评估控制能力 | |||||
ControlScore = evaluate_control_ability(Strategy, GameState), | |||||
% 评估节奏把握 | |||||
TempoScore = evaluate_tempo_management(Strategy, GameState), | |||||
% 评估风险管理 | |||||
RiskScore = evaluate_risk_management(Strategy, GameState), | |||||
% 评估资源利用 | |||||
ResourceScore = evaluate_resource_utilization(Strategy, GameState), | |||||
% 综合评分 | |||||
calculate_overall_score([ | |||||
{ControlScore, 0.3}, | |||||
{TempoScore, 0.25}, | |||||
{RiskScore, 0.25}, | |||||
{ResourceScore, 0.2} | |||||
]). | |||||
adapt_strategy(Strategy, GameState, Performance) -> | |||||
% 分析性能指标 | |||||
PerformanceAnalysis = analyze_performance(Performance), | |||||
% 确定调整方向 | |||||
AdjustmentDirection = determine_adjustment(PerformanceAnalysis), | |||||
% 生成适应性调整 | |||||
generate_adapted_strategy(Strategy, AdjustmentDirection, GameState). |
@ -0,0 +1,44 @@ | |||||
-module(system_supervisor). | |||||
-behaviour(supervisor). | |||||
-export([start_link/0, init/1]). | |||||
-export([start_system/0, stop_system/0, system_status/0]). | |||||
start_link() -> | |||||
supervisor:start_link({local, ?MODULE}, ?MODULE, []). | |||||
init([]) -> | |||||
SupFlags = #{ | |||||
strategy => one_for_one, | |||||
intensity => 10, | |||||
period => 60 | |||||
}, | |||||
Children = [ | |||||
#{ | |||||
id => game_manager, | |||||
start => {game_manager, start_link, []}, | |||||
restart => permanent, | |||||
shutdown => 5000, | |||||
type => worker, | |||||
modules => [game_manager] | |||||
}, | |||||
#{ | |||||
id => player_manager, | |||||
start => {player_manager, start_link, []}, | |||||
restart => permanent, | |||||
shutdown => 5000, | |||||
type => worker, | |||||
modules => [player_manager] | |||||
}, | |||||
#{ | |||||
id => ai_supervisor, | |||||
start => {ai_supervisor, start_link, []}, | |||||
restart => permanent, | |||||
shutdown => 5000, | |||||
type => supervisor, | |||||
modules => [ai_supervisor] | |||||
} | |||||
], | |||||
{ok, {SupFlags, Children}}. |
@ -0,0 +1,37 @@ | |||||
-module(test_suite). | |||||
-export([run_full_test/0, validate_ai_performance/1]). | |||||
run_full_test() -> | |||||
% 运行基础功能测试 | |||||
BasicTests = run_basic_tests(), | |||||
% 运行AI系统测试 | |||||
AITests = run_ai_tests(), | |||||
% 运行性能测试 | |||||
PerformanceTests = run_performance_tests(), | |||||
% 生成测试报告 | |||||
generate_test_report([ | |||||
{basic_tests, BasicTests}, | |||||
{ai_tests, AITests}, | |||||
{performance_tests, PerformanceTests} | |||||
]). | |||||
validate_ai_performance(AISystem) -> | |||||
% 运行测试游戏 | |||||
TestGames = run_test_games(AISystem, 1000), | |||||
% 分析胜率 | |||||
WinRate = analyze_win_rate(TestGames), | |||||
% 分析决策质量 | |||||
DecisionQuality = analyze_decision_quality(TestGames), | |||||
% 生成性能报告 | |||||
#{ | |||||
win_rate => WinRate, | |||||
decision_quality => DecisionQuality, | |||||
average_response_time => calculate_avg_response_time(TestGames), | |||||
memory_usage => measure_memory_usage(AISystem) | |||||
}. |
@ -0,0 +1,25 @@ | |||||
-module(training_system). | |||||
-export([start_training/0, process_game_data/1, update_models/1]). | |||||
%% 开始训练过程 | |||||
start_training() -> | |||||
TrainingData = load_training_data(), | |||||
Models = initialize_models(), | |||||
train_models(Models, TrainingData, [ | |||||
{epochs, 1000}, | |||||
{batch_size, 32}, | |||||
{learning_rate, 0.001} | |||||
]). | |||||
%% 处理游戏数据 | |||||
process_game_data(GameRecord) -> | |||||
Features = extract_features(GameRecord), | |||||
Labels = extract_labels(GameRecord), | |||||
update_training_dataset(Features, Labels). | |||||
%% 更新模型 | |||||
update_models(NewData) -> | |||||
CurrentModels = get_current_models(), | |||||
UpdatedModels = retrain_models(CurrentModels, NewData), | |||||
validate_models(UpdatedModels), | |||||
deploy_models(UpdatedModels). |
@ -0,0 +1,58 @@ | |||||
-module(visualization). | |||||
-behaviour(gen_server). | |||||
-export([start_link/0, init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2, code_change/3]). | |||||
-export([create_chart/2, update_chart/2, export_chart/2]). | |||||
-record(state, { | |||||
charts = #{}, % 图表集合 | |||||
renderers = #{}, % 渲染器 | |||||
export_formats = [png, svg, pdf] | |||||
}). | |||||
%% API | |||||
start_link() -> | |||||
gen_server:start_link({local, ?MODULE}, ?MODULE, [], []). | |||||
create_chart(ChartType, Data) -> | |||||
gen_server:call(?MODULE, {create_chart, ChartType, Data}). | |||||
update_chart(ChartId, NewData) -> | |||||
gen_server:call(?MODULE, {update_chart, ChartId, NewData}). | |||||
export_chart(ChartId, Format) -> | |||||
gen_server:call(?MODULE, {export_chart, ChartId, Format}). | |||||
%% 内部函数 | |||||
initialize_renderers() -> | |||||
#{ | |||||
line_chart => fun draw_line_chart/2, | |||||
bar_chart => fun draw_bar_chart/2, | |||||
pie_chart => fun draw_pie_chart/2, | |||||
scatter_plot => fun draw_scatter_plot/2 | |||||
}. | |||||
draw_line_chart(Data, Options) -> | |||||
% 实现线图绘制 | |||||
{ok, generate_line_chart(Data, Options)}. | |||||
draw_bar_chart(Data, Options) -> | |||||
% 实现柱状图绘制 | |||||
{ok, generate_bar_chart(Data, Options)}. | |||||
draw_pie_chart(Data, Options) -> | |||||
% 实现饼图绘制 | |||||
{ok, generate_pie_chart(Data, Options)}. | |||||
draw_scatter_plot(Data, Options) -> | |||||
% 实现散点图绘制 | |||||
{ok, generate_scatter_plot(Data, Options)}. | |||||
export_to_format(Chart, Format) -> | |||||
% 导出图表到指定格式 | |||||
case Format of | |||||
png -> export_to_png(Chart); | |||||
svg -> export_to_svg(Chart); | |||||
pdf -> export_to_pdf(Chart) | |||||
end. |
@ -0,0 +1,192 @@ | |||||
# 自动斗地主AI系统项目文档 | |||||
**文档生成日期:** 2025-02-21 03:49:02 UTC | |||||
**作者:** SisMaker | |||||
**项目版本:** 1.0.0 | |||||
## 项目概述 | |||||
本项目是一个基于Erlang开发的智能斗地主游戏系统,集成了深度学习、并行计算、性能监控和可视化分析等先进功能。系统采用模块化设计,具有高可扩展性和可维护性。 | |||||
## 系统架构 | |||||
### 核心模块 | |||||
1. **游戏核心模块** | |||||
- cards.erl: 牌类操作 | |||||
- card_rules.erl: 游戏规则 | |||||
- game_server.erl: 游戏服务器 | |||||
- player.erl: 玩家管理 | |||||
2. **AI系统模块** | |||||
- deep_learning.erl: 深度学习引擎 | |||||
- advanced_ai_player.erl: 高级AI玩家 | |||||
- matrix.erl: 矩阵运算 | |||||
- optimizer.erl: 优化器 | |||||
3. **系统支持模块** | |||||
- parallel_compute.erl: 并行计算 | |||||
- performance_monitor.erl: 性能监控 | |||||
- visualization.erl: 可视化分析 | |||||
## 功能特性 | |||||
### 1. 基础游戏功能 | |||||
- 完整的斗地主规则实现 | |||||
- 多人游戏支持 | |||||
- 房间管理系统 | |||||
- 积分系统 | |||||
### 2. AI系统 | |||||
#### 2.1 深度学习功能 | |||||
- 多层神经网络 | |||||
- 多种优化器(Adam, SGD) | |||||
- 实时学习能力 | |||||
- 策略适应 | |||||
#### 2.2 AI玩家特性 | |||||
- 多种性格特征(激进、保守、平衡、自适应) | |||||
- 动态决策系统 | |||||
- 对手模式识别 | |||||
- 自适应学习 | |||||
### 3. 系统性能 | |||||
#### 3.1 并行计算 | |||||
- 工作进程池管理 | |||||
- 负载均衡 | |||||
- 异步处理 | |||||
- 结果聚合 | |||||
#### 3.2 性能监控 | |||||
- 实时性能指标收集 | |||||
- 自动化性能分析 | |||||
- 告警系统 | |||||
- 性能报告生成 | |||||
### 4. 可视化分析 | |||||
- 多种图表类型支持 | |||||
- 实时数据更新 | |||||
- 多格式导出 | |||||
- 自定义显示选项 | |||||
## 技术实现 | |||||
### 1. 深度学习实现 | |||||
```erlang | |||||
% 示例:创建神经网络 | |||||
NetworkConfig = [64, 128, 64, 32], | |||||
{ok, Network} = deep_learning:create_network(NetworkConfig). | |||||
``` | |||||
### 2. 并行处理 | |||||
```erlang | |||||
% 示例:并行预测 | |||||
Inputs = [Input1, Input2, Input3], | |||||
{ok, Results} = parallel_compute:parallel_predict(Inputs, Network). | |||||
``` | |||||
### 3. 性能监控 | |||||
```erlang | |||||
% 示例:启动监控 | |||||
{ok, MonitorId} = performance_monitor:start_monitoring(Network). | |||||
``` | |||||
## 系统要求 | |||||
- Erlang/OTP 21+ | |||||
- 支持并行计算的多核系统 | |||||
- 足够的内存支持深度学习运算 | |||||
- 图形库支持(用于可视化) | |||||
## 性能指标 | |||||
- 支持同时运行多个游戏房间 | |||||
- AI决策响应时间 < 1秒 | |||||
- 支持实时性能监控和分析 | |||||
- 可扩展到分布式系统 | |||||
## 已实现功能列表 | |||||
### 游戏核心功能 | |||||
- [x] 完整的斗地主规则实现 | |||||
- [x] 多玩家支持 | |||||
- [x] 房间管理 | |||||
- [x] 积分系统 | |||||
### AI功能 | |||||
- [x] 深度学习引擎 | |||||
- [x] 多种AI性格 | |||||
- [x] 自适应学习 | |||||
- [x] 策略优化 | |||||
### 系统功能 | |||||
- [x] 并行计算 | |||||
- [x] 性能监控 | |||||
- [x] 可视化分析 | |||||
- [x] 实时数据处理 | |||||
## 待优化功能 | |||||
1. 分布式系统支持 | |||||
2. 数据持久化 | |||||
3. 更多AI算法 | |||||
4. Web界面 | |||||
5. 移动端支持 | |||||
6. 安全性增强 | |||||
7. 容错机制 | |||||
8. 日志系统 | |||||
## 使用说明 | |||||
### 1. 启动系统 | |||||
```erlang | |||||
% 编译所有模块 | |||||
c(matrix). | |||||
c(optimizer). | |||||
c(deep_learning). | |||||
c(parallel_compute). | |||||
c(performance_monitor). | |||||
c(visualization). | |||||
c(ai_test). | |||||
% 运行测试 | |||||
ai_test:run_test(). | |||||
``` | |||||
### 2. 创建游戏房间 | |||||
```erlang | |||||
{ok, RoomId} = room_manager:create_room("新手房", PlayerPid). | |||||
``` | |||||
### 3. 添加AI玩家 | |||||
```erlang | |||||
{ok, AiPlayer} = advanced_ai_player:start_link("AI_Player", aggressive). | |||||
``` | |||||
## 错误处理 | |||||
系统实现了基本的错误处理机制: | |||||
- 游戏异常处理 | |||||
- AI系统容错 | |||||
- 并行计算错误恢复 | |||||
- 性能监控告警 | |||||
## 维护建议 | |||||
1. 定期检查性能监控报告 | |||||
2. 更新AI模型训练数据 | |||||
3. 优化并行计算配置 | |||||
4. 备份系统数据 | |||||
## 联系方式 | |||||
- 作者:SisMaker | |||||
- 文档最后更新:2025-02-21 03:49:02 UTC | |||||
## 版权信息 | |||||
版权所有 © 2025 SisMaker。保留所有权利。 | |||||
--- | |||||
本文档详细描述了斗地主AI系统的架构、功能和实现细节,为系统的使用、维护和进一步开发提供了参考。如有任何问题或建议,请联系作者。 |