%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % Example of computation of minimax (and Nash) strategies for % bimatrix games. Note that it only computes one minimax % solution. How would you modify the program to have it compute % all the minimax solutions? % % Author: K. Passino % Version: 2/5/02 % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% clear all % Set the number of different possible values of the decision variables m=5; % If change will need to modify specific J1 value chosen below n=3; % Set the payoff matrices J1(i,j) and J2(i,j): % One set of payoff matrices that gives two Nash equilibria (1,3) and (4,1) is as follows. % Note that for these payoff matrices the minimax strategy is not a Nash solution. J1 =[-1 5 -3; -2 5 1; 4 3 -2; -5 -1 5; 3 0 2] J2 =[-1 2 -3; 2 -3 1; -2 3 1; -1 1 -1; 4 -4 1] % For generating random bimatrix games: J1=round(10*rand(m,n)-5*ones(m,n)) % Make it random integers between -5 and +5 J2=round(10*rand(m,n)-5*ones(m,n)) % Make it random integers between -5 and +5 % These can be used to find other equilibria. For instance, if you use the following two % payoff matrices then you will get one (admissible) Nash equilibrium which is then the % minimax strategy also. This shows that for some games the minimax strategy % is quite good. J1 =[4 1 -4; -2 5 3; -3 2 -1; 4 4 4; -3 -5 2] J2 =[2 -1 0; -2 4 0; -3 0 -1; -3 3 4; -3 0 -5] % Compute the minimax strategy (the name "minimax" clearly arises from the computations involved in % each player trying to minimize its maximum loss): % First, compute the security strategy for player 1 [maxvals1,indexmax1]=max(J1'); % This is the max value for each row (note transpose) [minimaxvalue1,secstratP1]=min(maxvals1) % Display the security value of P1 and its security strategy % (P1 loses no more than minimaxvalue1, no matter what strategy % P2 uses) % Second, compute the security strategy for player 2 (note difference from computation of security % strategies for matrix games since both payoff matrices are "loss" matrices, and note that computation of the % minimax strategies is completely independent) [maxvals2,indexmax2]=max(J2); % This is the min value for each column (note no transpose) [minimaxvalue2,secstratP2]=min(maxvals2) % Display the security value of P2 and its security strategy % The outcome of the game will be (with a minimax strategy) outcome1=J1(secstratP1,secstratP2) outcome2=J2(secstratP1,secstratP2) % Compute the Nash equilibria: flag=0; % Flag for saying if there is no Nash equilibria for i=1:m for j=1:n if J1(i,j)<=min(J1(:,j)) & J2(i,j)<=min(J2(i,:)), % Conduct two inequality tests display('Nash equilibrium and outcome:') % If satisfied, then diplay solution i j J1(i,j) J2(i,j) flag=1; % Indicates that there was one Nash equilibrium (or more) end end end if flag==0 display('There were no Nash equilibria') end %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Do another example: clear all % Define payoff (cost) functions for each player theta1=-4:0.05:4; % Ok, while we think of it as an infinite game computationally % we of course only study a finite number of points. m=length(theta1); theta2=-5:0.05:5; n=length(theta2); % A set of cost functions, J1 for player 1 and J2 for player 2 for ii=1:length(theta1) for jj=1:length(theta2) J1(ii,jj)=-2*(exp( (-(theta1(ii)-2)^2)/5 +(-4*(theta1(ii)*theta2(jj))/20) + ((-(theta2(jj)-3)^2)/2))); J2(ii,jj)=-1*(exp( (-(theta1(ii)-1)^2)/4 + (5*(theta1(ii)*theta2(jj))/10) + ((-(theta2(jj)+1)^2)/2))); end end % Next, compute the reaction curves for each player and will plot on top of J1 and J2 for j=1:length(theta2) [temp,t1]=min(J1(:,j)); % Find the min point on J1 with theta2 fixed, for all theta2 R1(j)=theta1(t1); % Compute the theta1 value that is the best reaction to each theta2 end for i=1:length(theta1) [temp,t2]=min(J2(i,:)); % Find the min point on J2 with theta1 fixed R2(i)=theta2(t2); % Compute the theta2 value that is the best reaction to each theta1 end % Compute the minimax strategy (the name "minimax" clearly arises from the computations involved in % each player trying to minimize its maximum loss): % First, compute the security strategy for player 1 [maxvals1,indexmax1]=max(J1'); % This is the max value for each row (note transpose) [minimaxvalue1,secstratP1]=min(maxvals1) % Display the security value of P1 and its security strategy % (P1 loses no more than minimaxvalue1, no matter what strategy % P2 uses) theta1(secstratP1) % Second, compute the security strategy for player 2 (note difference from computation of security % strategies for matrix games since both payoff matrices are "loss" matrices, and note that computation of the % minimax strategies is completely independent) [maxvals2,indexmax2]=max(J2); % This is the min value for each column (note no transpose) [minimaxvalue2,secstratP2]=min(maxvals2) % Display the security value of P2 and its security strategy theta2(secstratP2) % The outcome of the game will be (with a minimax strategy) outcome1=J1(secstratP1,secstratP2) outcome2=J2(secstratP1,secstratP2) figure(1) clf contour(theta2,theta1,J1,10) hold on contour(theta2,theta1,J2,10) hold on plot(theta2,R1,'k-') hold on plot(R2,theta1,'k--') hold on plot(theta2(secstratP2),theta1(secstratP1),'x') xlabel('\theta_2') ylabel('\theta_1') title('J_1, J_2, R_1 (-), R_2 (--), "x" marks a minimax solution') hold off %------------------------------------- % End of program %-------------------------------------